Hello and welcome to the Active Inference Livestream. This is the Active Inference Lab, and we are here in Active Inference Livestream 18.2 on March 30th, 2021. Welcome, everyone, to the Active Inference Lab. This is a recorded and archived livestream, so please provide us with feedback so that we can improve our work. Sorry, the video just changed a little bit. There we go. All backgrounds and perspectives are welcome here, and we'll all be following good video etiquette for livestreams. Today we're here in session 18.2 on March 30th, having the follow-up discussion with the authors and other participants on this predictive global neuronal workspace paper. The goal today is really to follow up on last week's discussion, to continue discussing and learning about this very cool paper, why the authors did what they did, and what the implications are for different areas. So today we're just going to go through some introductions and warm-up questions, and then we'll walk through the paper and the slides, ask a few questions of the authors, and get a few different perspectives. For the introductions, we'll just go around and introduce ourselves and then pass it to somebody who hasn't spoken yet. I'm Daniel, I'm a postdoc in California, and I'll pass it to Alex. Thanks. Hi everyone, I'm Alex Vyatkin. I'm a researcher at a systems management school in Moscow, Russia. I'm also a co-organizer for the Active Inference Lab, and I pass it to Steven. Hello, I'm Steven. I'm based in Toronto, and I do a lot of work with social topographies and landscapes for social drama, and I will pass it over to Dean. Hi, I'm Dean. I am retired. I'm hanging out in the loft of my cabin, which is not very well lit. I'll pass it to Ryan. Yeah, so I'm Ryan Smith. I'm an investigator at the Laureate Institute for Brain Research in Tulsa, Oklahoma.
Yeah, I mainly work on computational modeling of empirical behavior in different psychiatric disorders, as well as some theoretical work on emotions, emotional awareness, and interoception. And I guess I'll send it to Chris. Hi, I'm Christopher Whyte. I'm a PhD student at the University of Cambridge in England, and I mostly work on active inference, using active inference computational models of cognitive control. But a lot of my previous work has focused on visual awareness. So I'll pass it to Adam. Hi, I'm Adam. I'm a postdoc at the Johns Hopkins School of Medicine Center for Psychedelic and Consciousness Research. I'm also a research fellow at the Kinsey Institute, and I'm interested in consciousness, free will too. Hi, everyone. Cool, does anyone want to raise their hand and give a thought on one of these warm-up questions, which are just: what is something you're excited or curious about today, or something that you liked or remembered about the paper or last week's discussion, to sort of jumpstart where we were at? Or maybe, really quickly at the beginning, Christopher wanted to clear something up, and then we'll go into questions. So maybe everyone could write down a question, and then, Christopher, go for it. Yeah, sure. So I said two things last time that were just false, where I just made mistakes, and I wanted to correct them. The first one: someone asked a question about the thalamus and its involvement in consciousness, and I referenced a whole body of work from Matthew Larkum's group. I think I said the medial dorsal nucleus, and it's not; it's the central lateral nucleus of the thalamus. I think 99% of people wouldn't care, but anyone who knows anything about the thalamus would be pained by that. And the second thing was when I was talking about King and Dehaene's Bayesian model of conscious access. I think I described the model as univariate Gaussian. That's also not correct.
It's multivariate Gaussian, although you can extend the same model to the univariate case. Anyway, it doesn't matter, but I just wanted to correct those two things. Thanks for the correction. So if everyone could raise their hand; first we'll go with Adam and then Stephen. I was curious about what Christopher just said. So the medial dorsal, that would be the thalamus for the anterior cingulate and the frontal lobes, kind of like the gateway to the amygdala et al.? Would that be right? And then the central lateral, who's that looping with? I forget. I think the central lateral would be looping... I was specifically thinking about a loop between layer five pyramidal neurons and the central lateral thalamus. So if I could ask you a question there: why are we identifying brain regions? Or what would it mean to identify brain regions one way or the other, in humans or some other species? So this particular body of work, not to go too far on a tangent, but there's this really impressive body of work that has basically identified both level of consciousness and content of consciousness with burst-mode firing in layer five of cortex. I think most of this evidence is from a mouse model, though. Interpret that as you will. Yeah, Adam. Very soon. But yeah, go on. I mean, for me, it would partially be what systems might contribute to wakefulness or different aspects of consciousness, and how we want to carve the joints on it in different respects. So let's say stimulating the medial dorsal was more important; well, then that might implicate the specific relays, its ability to form thalamocortical or thalamo-striato-cortical loops, what it helps them to bind together functionally, and other functions. It would tell you what processes might be involved.
So maybe action or salience might be important, or maybe it's the frontal lobes as a hub, but then which parts of the frontal lobes are the topologically central hubs? Does it look like motor bits, or does it look like something else? So you'd be trying to fish around for clues about what's important in what ways, but it's easy to tell stories. Okay, cool. So Stephen, and then anyone else who has a question. So yeah, I'm just really excited about... when I listened last week, you talked about temperature being used in the models, and you had different places where you used different amounts of temperature, I think in the softmax function, to show different types of model scenarios. So I'd just be curious... I'm excited about maybe the general way that you use your thinking to build the models, to inform how you start to look at consciousness and the way the brain works, as much as the way that you find the results from the models. I'd be interested in those two sides. So one thing I just want to make sure is clear is that computational neuroscience has historically ended up borrowing a lot of terminology from physics and engineering and things like that. So some of these sorts of terms can be a little funny, right? Like temperature: a temperature parameter, or typically an inverse temperature parameter, is something borrowed from physics that, in the physics sense, has to do with how much energy particles are bouncing around against each other with, in a very oversimplified way of putting it. So it just means essentially there's more randomness in the system. In this context, you're just applying that same equation to describe essentially the amount of noise in decision making.
So a higher temperature in this case is just a parameter in an equation that controls how deterministic versus how stochastic the decision-making processes are. I just want to make sure it's clear to anyone listening that it has nothing to do with actual physical temperature, like being hot or cold. It's just a parameter that modulates how random decision making is. So in this case, it's just controlling... for example, if the temperature parameter is really low, or the inverse temperature has a really high value, then even if the model was only, say, 51% versus 49% confident in one thing over another, it would always choose the action associated with the 51%. Whereas if the temperature parameter value was high, or the inverse temperature parameter was low, then the model would choose the action associated with the 51% with roughly 0.51 probability. It would choose that action less often than it would if the probability was 0.7, and less often still than if the probability was 0.9. So essentially, a low inverse temperature parameter value just allows the actual frequency of choice to scale, in some sense, with the probabilistic beliefs over the different states in the model. Thank you, Ryan. We'll have Christopher, then Adam, and then anyone else who raises their hand. So just to contextualize that a little bit: the softmax function is exactly the same function as a Boltzmann distribution from statistical physics. So you often just use the same terminology, and in the physics case it has a literal physical interpretation; in this case, it doesn't. So that's why there's that overlapping terminology.
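Ryan's point about inverse temperature can be made concrete in a few lines of Python. This is an illustrative sketch, not code from the paper: the `softmax` helper and the toy 51/49 beliefs are assumptions, but the behavior matches the description above (a high inverse temperature makes choice nearly deterministic; a low one lets choice frequency track the beliefs).

```python
import numpy as np

def softmax(values, beta=1.0):
    """Softmax with inverse temperature beta: higher beta -> more deterministic."""
    v = beta * np.asarray(values, dtype=float)
    v -= v.max()  # subtract max for numerical stability
    p = np.exp(v)
    return p / p.sum()

# Log-probabilities of two actions the agent is 51% vs 49% confident in
beliefs = np.log([0.51, 0.49])

# High inverse temperature (low temperature): heavily favors the 51% option
print(softmax(beliefs, beta=50.0))

# Inverse temperature of 1: choice probabilities simply equal the beliefs
print(softmax(beliefs, beta=1.0))
```

With `beta=50` the slim 51/49 edge is amplified into a strong preference, while `beta=1` reproduces the beliefs exactly, which is the "probability matching" regime Ryan describes.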
In terms of deciding on those parameters, or actually building them in, it really just came from thinking about what attention does. So what we were trying to modulate there was attention and signal strength, and both of those things in some way modulate the strength of feedforward input. So when you think about that, you then ask: okay, what part of the model corresponds to that? Well, it's the likelihood matrix, which decides the precision of the mapping between external observations and first-level hidden states. And the idea is that both attention and external signal strength, where that might be something like contrast, will jointly determine the precision of that mapping. And then there are a lot of different ways you can cash that out if you actually want to get into the neurobiology of it, like the normalization model of attention or biased competition or something like that. Abstracting away from that detail, it basically just modulates the A matrix at the computational level of analysis. So that was the thinking that went into it. Very interesting. Adam, then Stephen, then anyone else? Okay, Adam, go for it. Yeah, sorry about that. It's kind of off the current topic, but from before: some quick Googling turned up that the central lateral nucleus seems to be looping largely with parietal and temporal association areas. So that's potentially interesting in ways that could indirectly speak to debates on the physical substrates of consciousness, like the brouhaha between IIT and GNWT about whether it's more the front of the brain or the back of the brain, and in which senses they mean consciousness. But interestingly, I'm actually having trouble finding a diagram of the thalamus that shows the central lateral nucleus. I see central medial, and not central lateral.
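To make the earlier point about the likelihood (A) matrix concrete: one common way to implement precision modulation is to raise the entries of the state-to-observation mapping to an inverse-temperature power and renormalize each column. This is a hypothetical sketch, not the paper's implementation; the function name, the `zeta` parameter, and the toy matrix are all assumptions.

```python
import numpy as np

def precision_weighted_A(A, zeta):
    """Apply inverse temperature zeta column-wise to a likelihood matrix.

    zeta near 0 flattens the mapping (observations carry little evidence);
    large zeta sharpens it (observations become highly informative).
    """
    logA = np.log(A + 1e-16)              # avoid log(0)
    W = np.exp(zeta * logA)               # elementwise A ** zeta
    return W / W.sum(axis=0, keepdims=True)  # renormalize columns

# Toy likelihood mapping from 2 hidden states (columns) to 2 observations (rows)
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])

print(precision_weighted_A(A, zeta=2.0))   # sharper, more precise mapping
print(precision_weighted_A(A, zeta=0.1))   # near-uniform, imprecise mapping
```

On this reading, attention and stimulus contrast both act by pushing `zeta` up or down, which is the "jointly determine the precision of that mapping" idea in the discussion.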
It seems like they might be kind of tucking it in with the pulvinar, whatever that means. Stephen? Yeah, I'd just say thanks for that feedback; that was really useful. And because I've got a background in chemistry, I keep thinking about the Gibbs free energy, where there's a temperature times change-in-entropy term. So the temperature is multiplied into the amount of entropy change to make it more significant. That would kind of mirror why it's a bit like you're putting in noise or entropy. So that's quite useful. Thanks. Yeah, welcome, Blue. You or anyone else can raise their hand, but first one chemistry metaphor. And again, Ryan, as you brought up, it's not always a literal meaning, but there's the thermodynamic versus the kinetic alternative for a reaction. The thermodynamic outcome for a candle is to burn, because it has all this pent-up energy, but it doesn't spontaneously combust because the kinetic barrier, the activation barrier, is too high. And then when there's an enzyme or a spark of energy, in the right situation the product can go all the way down the big delta G to the ultimate thermodynamic least-energy state, versus the more kinetically accessible one. And that's kind of like information foraging: there's the perfect citation you would find if you had the right search and it was at that low temperature, or something analogous, but if it's too kinetically constrained, then attention is drawn away. I'm not sure if that will map directly onto this whole active inference model, but it's just another chemical-informational distinction. Well, I mean, it depends how you're using it. In this case, it's not really much of a connection, because essentially we're just using a temperature parameter in a softmax function to control the precision of a mapping in a few different places.
For attention and for stimulus strength, we're just using it to change the precision of the mapping between states and observations at the first level. So it controls essentially how strongly a sensory signal ends up updating beliefs, posteriors over states at the first level, and then that's modulated by a second temperature parameter that does the same thing but accounts for attention. So these jointly control essentially how much evidence sensory input provides for posteriors over states at the first level. We're using it for that kind of thing, and then we also use it for controlling how deterministic choice is at the second level of the model, in terms of action selection. So in all those cases, I don't personally see a direct relationship between that and the thermodynamic aspect of temperature, getting over energy barriers and things like that. But I could just be not thinking it through all the way. Stephen, then anyone else? Yeah, I think what you're saying makes a lot of sense. And I suppose the nice thing with the modeling is that you've got this potential... I know you're using it, as you say, as a way to bring in noise via the softmax, but it gives you a way to increase and decrease it, which is hard to do in real life when you do empirical work. So it's one of the strengths of the modeling that you can change that parameter and see; it seems to be one way that you really get insights into the dynamics of the system. Yeah, in terms of just being able to modulate or mimic the influence of attention on the apparent perception of the signal. Do we have Dean, and then Stephen? Yeah, I just have a quick question here.
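The joint effect of the two precision parameters on belief updating can be sketched as a single Bayesian update in which signal strength and attention both scale the log-likelihood. This is a hypothetical illustration, not the paper's code; in particular, combining the two parameters multiplicatively and the parameter names are my assumptions.

```python
import numpy as np

def update_beliefs(prior, A, obs_idx, zeta_signal, zeta_attn):
    """Posterior over states given one observation, where the likelihood's
    precision is jointly set by signal strength and attention (both assumed
    to act as inverse temperatures on the state->observation mapping)."""
    log_lik = (zeta_signal * zeta_attn) * np.log(A[obs_idx] + 1e-16)
    log_post = np.log(prior) + log_lik
    post = np.exp(log_post - log_post.max())  # stabilize before normalizing
    return post / post.sum()

prior = np.array([0.5, 0.5])
A = np.array([[0.75, 0.25],   # P(obs 0 | state 0), P(obs 0 | state 1)
              [0.25, 0.75]])  # P(obs 1 | state 0), P(obs 1 | state 1)

# Strong signal, attended: observation 0 strongly favors state 0
print(update_beliefs(prior, A, 0, zeta_signal=2.0, zeta_attn=2.0))

# Weak signal, unattended: the same observation barely moves beliefs
print(update_beliefs(prior, A, 0, zeta_signal=0.5, zeta_attn=0.5))
```

The same observation produces a confident posterior when both precisions are high, and only a slight belief shift when either is low, which captures the "how much evidence sensory input provides" point above.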
And maybe this is far too oversimplified, but when you guys were doing this work, I also started looking at the affect paper, and I think the two together gave me a better sense of some of the work that I was doing around helping people walk into these novel situations pre-reflectively. There is a tendency to want to commit, to want to pay attention, to want to give over whatever you have in terms of processing power to figuring out what's going on. But we had to spend quite a bit of time helping people get the idea that there is this channel, this kind of sweet spot, and when you were talking about degrees, it brought up for me again the idea of how to help a person know when they're over-committing, when they're over-correcting, and when they're over-compensating. So when they're trying to do this estimation, how do they know that they're not missing the moonwalking bear because they're so committed to counting up the number of basketball passes? I wonder if you guys had spent a little bit of time looking at the other end of the channel, when people can actually readjust back in. I know that's probably where the degrees part comes in, but I was just wondering about that. So, I mean, are you imagining some kind of clinical scenario where a person, say, biases attention away from something that could be aversive if they were to focus on it?
Yeah, exactly. Yeah, so there's probably a mechanistic relation in a sense, just in the broad sense of how we think attention is implemented. So, for example, if you think that a person is avoiding paying attention to something because it would be aversive, then you can end up with this kind of implicit reinforcement of attentional policy selection, which we don't directly have in this model; the agent choosing the precision associated with attention is something that can be added, and it's something that we have actually added in some more recent work that we're doing right now. The idea there, or one way to think about it, would be that whenever a person starts to pay attention to the thing that would be aversive, it starts to feel aversive, right? So if they then pay attention away again, the choice of paying attention away actually ends up being negatively reinforced, because the bad feeling goes away. What that ends up doing, at least according to this kind of reinforcement learning account, is assigning higher value to the policy of attending away. So over time, the probability of attending away just ends up getting higher and higher, because it makes you feel better every time you do it, and this sort of avoidant attention can become really habitual. That's one way of thinking about it. And so, in the clinical setting, you want to help people potentially start to gradually expose themselves to paying attention to this thing that's aversive, in hopes that, if they're able to do that in a sustained way, then over the long run the aversiveness will go down as they learn to better deal with the thing they've been avoiding attentionally. So that's fairly tangential to the kind of thing we're trying to do here, but there's a connection in this very broad sense: if you can control attention, and choices of what to pay attention to can be
reinforced, then that's one way you can end up with this kind of thing, where a person might not even know that they've developed this strong habit of not paying attention to something. It just becomes very automated, the same way any other behavior can become habitual via repeated reinforcement; here it would just be a kind of cognitive action as opposed to a more overt behavioral action. Very interesting, thanks for that question and response. Adam, and then anyone else who raises their hand? I guess, to come out of left field with what you were just describing: it seems like that could have some relation to grounding psychodynamic concepts, in the sense that if these patterns of attentional approach and avoidance can move from your C matrix to your E matrix, you can end up finding yourself in these sorts of garden paths of attending, and coming to inferences for reasons that you might not even understand, just because of the way your policy selections or attention have been shaped by conditions that might be too numerous and varied for you to track. Yeah, I mean, we've written a few papers on this, or I have with some colleagues. Again, my focus is largely in terms of emotion and psychiatric conditions, right? And I would say it's a way of recasting the sorts of behaviors that psychodynamic approaches are trying to capture, but doing it in a way that doesn't actually posit the existence of these sorts of repressive mechanisms. Like, there are no unconscious agents, right? There's no id being posited that's sitting there keeping these things out of awareness while somehow they're still present unconsciously. There's nothing like that, right?
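The reinforcement story Ryan sketches, where attending away is negatively reinforced until avoidance becomes habitual, can be illustrated with a toy simulation. This is a generic reinforcement-learning sketch (a Rescorla-Wagner value update with softmax choice), not the active inference model under discussion; all names and parameter values are assumptions made for illustration.

```python
import numpy as np

# Toy model of avoidant attention becoming habitual.
# Action 0 = attend toward the aversive thing (yields a negative outcome);
# action 1 = attend away (removes the bad feeling, i.e., reward 0 instead of -1).
rng = np.random.default_rng(0)
q = np.zeros(2)          # learned action values: [attend-toward, attend-away]
alpha, beta = 0.3, 3.0   # learning rate, inverse temperature for choice

for trial in range(200):
    p = np.exp(beta * q) / np.exp(beta * q).sum()  # softmax over action values
    action = rng.choice(2, p=p)
    reward = -1.0 if action == 0 else 0.0          # attending toward feels aversive
    q[action] += alpha * (reward - q[action])      # Rescorla-Wagner update

p_away = np.exp(beta * q)[1] / np.exp(beta * q).sum()
print(f"P(attend-away) after learning: {p_away:.2f}")  # avoidance comes to dominate
```

After a few aversive encounters, the value of attending toward drops, the softmax increasingly selects attending away, and the avoidance policy entrenches itself without the agent ever needing to represent *why*, which is the sense in which the habit can form outside awareness.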
It's just a very simple story of having reinforced patterns of attention. So it's a way of capturing the phenomena that I think psychodynamic approaches are often trying to capture, but it doesn't posit a lot of the kind of ontological entities that are posited in psychodynamic approaches. Cool. Stephen, then anyone else want to raise your hand? Adam, go for it. Yeah, I'm not saying you're a Freudian, but you're actually kind of rescuing the baby from the bathwater, cleaning it up, contextualizing it. Like, why did people have these intuitions to begin with? What were the grains of truth? I'm wondering if, through this, these kinds of patterns of attending, some of which you're not tracking, could be an account of what would lead to potential mismatches or discrepancies between things like stated and revealed preferences, or between your actual affective disposition with respect to a thing and what you think is going on. And then those could potentially work at cross purposes in ways that could be difficult to explicitly model, and potentially have problematic sequelae, or not. But I'm wondering if this could lead to a mismatch, or cross purposes, between tracked states at different levels. I mean, I'd probably need more concrete examples to try to figure that out; that's probably something I'd have to think about more. I guess one, quickly, would be implicit affect tests or something like that. Like, you say, "I'm not racist," but then it turns out, no, no, you are.
And you can see it if you actually look at the reaction, or something like that. Or, "I am really happy in this relationship," but then I measure your heart rate variability and it's crap. Things like that: your actual affective state might end up being different from what you think it is, and there could be a lack of self-awareness, a difficulty coming to a coherent model of yourself, because of these shaped paths of attention. You might miss the mark about yourself because it was difficult to see, something like that. I mean, I would think that perhaps there are some instances of the sort of thing you're talking about that could be captured by the kind of mechanism we're talking about here, or the kind of tangential extension of it that we're talking about. But I'm not confident that all of them could; I imagine there are a number of mechanisms you might be able to use to describe some of those things. Like, if you're talking about implicit associations on an IAT, right, or something like that, my intuition is that that's probably a different kind of mechanism, and I know Chris has actually been working on models of Stroop tasks, which is probably pretty different from some of the other examples you were giving. But yeah, like I said, taking a particular example, I'd probably have to think it through a little more: what kind of generative model would be able to reproduce the kind of behavior you're talking about? Or, most likely, there would be multiple generative models that could reproduce it, and then you'd have to do some kind of experiment, right, to do model comparison and see which one actually best accounts for it.
But again, I wouldn't feel really confident saying what that might look like just from the very abstract, general description we're talking about right now. Nice. Thank you for the response. Stephen? Yeah, just bringing this point with this model, now maybe getting into the modeling part: I think that's interesting with the awareness piece. There's a lot of work on awareness-based systems change, and looking at awareness. And I'd be interested in your thoughts on how we can move beyond being trapped in a psychology of the individual, a pathology of the individual, and move into the intersubjective, the multi-scale. So you talk about tasks, but it could also be used in taskscapes, you know, organizational contexts or groups of people. Because it's not nailed down to a psychological model of a person's thoughts and feelings, but is more a general model of attentional structure, it has the ability to move between scales and contexts. I just put that out there as something that seems quite interesting. Yes. There's more... oh, right, go ahead. So I think the utility of models like these is to identify a very specific phenomenon, namely having conscious access to some content and not others. So I have conscious access to what's in front of me, my visual field, at the moment. If I direct my attention around my body, I can suddenly become aware of different kinds of somatosensory bits of information. And all of these have very specific kinds of neural signatures, as it were. What we're trying to do is propose a model, at a computational level of analysis and an algorithmic level of analysis, that explains these.
And outside of that, I'm not sure to what extent talking about awareness, intersubjective awareness, is at all the same thing as talking about awareness as someone who works in, say, the neuroscience of consciousness would. They might be, but I don't see any evidence that they are. Pretty interesting, because it's in the title of the paper, so many people might think of it differently. I mean, it's just an interesting question, right? Sorry, what's in the title of the paper? Just the idea that visual consciousness is involved? Well, but again, this is a model of a particular visual consciousness task, right? It's meant to account for things that are seen in the visual consciousness literature. The structure is abstract enough, and we say this in the discussion, that it ought to apply to cases of having access to one bit of information versus another in other domains, like the auditory domain. Something I've been working on with a bunch of colleagues at LIBR, where I am, is trying to see if the same sort of thing looks like it's present in the interoceptive domain, for example. So I think it's right to say that we would hope, anyway, that it's abstract enough that it can account for any piece of information that you either gain access to or not, when it's being represented. So right now, as I'm talking to you, I'm not necessarily aware of what my heart's doing, but if I now choose to pay attention to it, all of a sudden, okay, I can kind of feel that it's beating at a certain rate. I believe that I feel something I wasn't conscious of before, despite the fact that my brain was unconsciously tracking what my heart was doing that whole time, right? So it could apply to being aware or not aware of any stimulus.
So, only to the extent that, when we say something about interoceptive awareness, that means something about a piece of information that, before I was attending to it, I didn't know was there, but now that I'm paying attention to it, I do. For example, if I happen to notice that a person is smiling, even though they were smiling at me the whole time, maybe my brain had picked up on the fact that they were smiling, but I wasn't aware of it until I focused my attention on it, right? Or, even more abstractly, when my brain had unconsciously detected the smile, there might even have been some further inference that that means something about them being happy. So there might even be a case where I either do or do not gain access to this unconscious inference, this posterior belief that the person is more likely happy than unhappy, or something like that. But these are the sorts of cases that would have to be tested in their own right. My broad point is just that it's an abstract structure, where all it is meant to account for is when your brain is representing something and you either become aware of it or not. I mean, if I could just build on that slightly: I think it's very tempting, when we've got computational models, when we've got lots of equations sitting in front of us, to get an illusion of rigor where there is none. The thing that makes these models meaningful is that we're tracking some real-world phenomenon. In terms of the abstract model structure, it may be the case that, say, you have group dynamics where some aspect, maybe the behavior of individuals within a group, evolves at a really quick timescale, and then group decision-making evolves at a slower timescale.
You could in principle fit a two-level partially observable Markov decision process (POMDP) to this as a model of that behavior. But even in that case, even though we're fitting the same model, I don't think those are at all the same phenomenon, although there may be similarities across them. I would be extremely resistant to calling one of them visual awareness, or saying the group has visual awareness. Interesting, and of course there's a parallel debate in the ant world: is there colony awareness, or single six-legged-ant awareness? How should we think about distributed systems, and aren't we all kind of just distributed systems all the way down? So, Adam and then Stephen. So this will be related to, I guess, the different types of intersubjective or introspective awareness that you might believe this is a model for. I recently uploaded a preprint, after some conversations with people from the Brain Institute at Chapman University, basically trying to do the beginnings of an active inference account of Libet phenomena and readiness potentials that is a little bit less deflationary with respect to volition and conscious causation. In essence, I was thinking of the actual moving of your hand as an accumulation of model evidence with respect to the appropriate proprioceptive pose, and that there might be some sort of conjunction with whether your will, or feeling of urge, was part of it. You could think of that as an inference over your affective state, and whether these things are in alignment could help to explain whether you're aware of readiness potential activity, or whether this awareness is actually feeding back and contributing to it. And so I feel like your model could potentially provide a very precise contextualization of this, and if it could really settle what's going on, then I would like to collaborate.
And so, just to make it really clear: maybe that could be the case, I have no idea, but it could only potentially be the case if what you're talking about, like what you're motivated to do, or what you're in the process of being about to do, or whatever aspect of action or action tendency, is a represented piece of information that you can attend to or not, and that you could then gate into the higher, deeper temporal level of the model that would inform verbal reporting and other goal-directed uses. So my point is, this could only apply if the representation associated with action that you're talking about is in some sense formally equivalent to a representation of something visual that you can attend to. That makes sense. So the preprint is three pages, more of a promissory note. But could I pick your brain later about whether you think there's a way we could actually get some empirical traction and nail down the specifics along the lines you're saying? So, if you have a task, and that task involves people paying attention to the aspect of action that you're talking about, then in principle you could, in some sense, cue them or prime them in a way that engages that sort of action bias or action motivation, and again, they can either attend to it or not, or report on it or not. In that case, yes; otherwise, probably not. It's a little tricky, I think, in that the action that would be attended to could be of a couple of varieties, but few of them would be directly observable. So you could say it's the action of the neural activity, in terms of the ramping activity. Do they have access to that?
I was thinking it could also be the action of mental acts, like rehearsing, imagining in a kind of sophisticated active inference sense: am I gonna move my hand or not? What would that be like? And then that contributing to the signal. And where actual awareness of this fictive action could be part of what contributes to whether or not you feel ownership of it. It's interesting that some people don't show readiness potentials, they just act, while other people do; schizophrenic patients, for example, tend not to show readiness potentials before they spontaneously raise their hand. But it would be hard to get at this, since the action is internal, and so I don't know what you would do with that. Yeah, it just seems like you have to have some kind of behavioral readout that can be fit to a model of this, to be able to say that this explains the ability to consciously report on it or not, right? So I guess, could you use, right before the action there tends to be, it starts out more bilateral and then the ramping activity tends to lateralize, and then the hand moves. Could you use that sort of signal as the readout? Like the ensemble activity? So in principle you can fit models to neural activity. I mean, you can fit models to anything that the generative model generates. But I'm not necessarily all that confident that you'd be able to justify the claim that, because the model in one case generates a readiness potential signal, that has anything to do with having conscious access to something. And there are other problems. If you're using the ensemble activity, there's still the question of interpretation: awareness of what? What are we even talking about with the awareness? The whole point is to try to contextualize what those signals mean. And so, yeah.
Yeah, I mean, in the majority of cases, either directly or indirectly, verbal report is still kind of the gold standard for whether someone has access to something, right? Even in no-report paradigms, you're still, later, asking them to report what they were experiencing at the time. So the issue is that, in some sense, all supportable or accepted evidence for being consciously aware of something, and Chris, you can correct me if you think this is wrong, always ultimately comes down to verbal report or some reliable correlate of verbal report, whether at the time or after. I agree with that. I'm not an expert in no-report paradigms, but to my knowledge, the reason we trust no-report paradigms is because our measure, whether that is a particular mode of eye tracking or just incidental memory afterwards, all of those things correlate really, really well with report, and that's the thing that makes us trust them as measures of conscious access independent of report. Yeah, I think reports are foundational; unless we have really, really good evidence otherwise, I think we have to take them kind of at face value as a point of fact. Cool, we'll return to the stack. So Blue, and then Stephen, and then anyone else who raises their hands. So Adam always brings me into these metaphysical thoughts whenever he talks, and I was just wondering, in terms of this visual model, you hear the expression that people see what they want to see, right? There's the idea of paying attention to something, and so have you thought about testing whether people can deceive themselves in this visual process, right? Like some self-deception, or even, you hear of the power of manifestation.
Like, can people bring into their awareness something because they simply want it to be there? Have you thought about this, or about testing the model in this way? Yeah, I mean, if you go to the figure, maybe scroll down. Which one? So we have a, actually, I'm not sure. So that's the ERPs. Yeah, the taxonomy, the extended taxonomy. So we start with the four-way taxonomy which Dehaene proposed along with some other colleagues, of the factors underlying whether you'll have conscious access to a stimulus, and it breaks down into whether attention was present or absent and whether signal strength was weak or strong. And then we extended this by adding in prior expectations, so the extent to which you expect a stimulus to appear. One of the big motivations for this work is that there is an enormous body of work now showing that expectation does play a fairly fundamental role, I think, in determining the contents of visual awareness. As for whether you can voluntarily bring something into awareness, I think that's more to do with attention, I suspect. Say, for example, you are looking at a bistable image, I think it's called a Necker cube: you can attend to one part of that image and have the cube flip, versus if you attend to another part, it'll flip back. And you can do similar deliberate moves of attention with a lot of bistable images. Also, back to the expectation thing and seeing what you expect to see: a lot of that depends upon how strong your bottom-up evidence is. So there are paradigms that basically condition people into expecting a stimulus to appear.
And you can basically induce people to report what is essentially a hallucination in those cases, to mistakenly report something that we know for a fact, as the experimentalists, wasn't there. There are two examples; what I have in mind at the moment is a really clever set of studies by Jean Aroub, and I know, Ryan, maybe you can talk about this, there's also a lot of work on conditioned hallucinations from a group at Yale who I know you collaborate with. Yeah, and that was something I was actually gonna bring up. I agree, because in most cases I think it's gonna have to do with some kind of motivated selective attention that biases the different possible interpretations in your posteriors over states at the first level. But yeah, definitely you can have these expectation effects, although I don't necessarily think of those as being voluntary, right? Selective attention is definitely a kind of controllable cognitive action, whereas shifting prior expectations, I'm less confident that can be cast that way. But priors definitely have an influence, right, on what your posteriors over states are given some stimulus, especially, like Chris said, when the stimulus strength is fairly low or noisy. So some of the work that Al Powers and his group at Yale and I have been collaborating on, building active inference models for this stuff, is these sorts of conditioned hallucination paradigms in people with psychosis. The way these paradigms work is that you start out always showing people a light, and that is always coincident with a tone that appears in some white noise.
And so on the first several trials, there's a fairly strong tone that plays in the white noise every time they see a light, and it's thresholded so that they would perceive that tone 75% of the time. And then after several of those trials, there start to be cases where the tone is still present, but it's weaker, so they only would detect it 50 or 25% of the time. And then there are trials where the light comes on, but no tone is played and there's just white noise. What they have found is that people with psychosis have a higher probability of reporting hearing tones when the light shows up, even when there was no tone. So they build up this expectation to hear a tone when they see the light, and somehow those prior expectations that the tone will be there have a stronger influence over whether they hallucinate the tone or not; that influence is stronger in people that have hallucinations than in people that don't. Which has led to this idea that part of the computational issue in people with psychosis who are more likely to have hallucinations is that prior expectations are in some way really precise and therefore going to dominate perception. And that could be either because the prior expectations themselves are really precise, or because beliefs about the actual sensory signal are believed to be very imprecise. But in either case, the relative precisions are thought to favor prior expectations in psychosis to a greater degree than in people without psychosis. Thank you. Can I just follow up really quick? Yep. Yeah.
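This precision-weighting story can be sketched in a few lines of code. This is a minimal illustration under my own assumptions (the function name, the Gaussian likelihoods, and the specific numbers are invented for the example), not the actual model fit to the Yale data: the idea is just that, given the same noise-only evidence on a no-tone trial, an agent whose conditioned prior is strong and whose sensory precision is low ends up with a posterior that favors "tone present".

```python
import math

def posterior_tone(prior_p, sensory_sample, sensory_precision):
    """Posterior probability that a tone is present, combining a prior
    belief (e.g. conditioned by the light cue) with a noisy sensory
    likelihood. Likelihoods are Gaussian around 1 (tone) and 0 (no tone),
    with width set by sensory_precision."""
    def lik(mu):
        return math.exp(-0.5 * sensory_precision * (sensory_sample - mu) ** 2)
    num = prior_p * lik(1.0)
    den = num + (1.0 - prior_p) * lik(0.0)
    return num / den

# No tone is actually played: the sensory sample is just noise near 0.
sample = 0.3

# A 'healthy-like' agent: moderate prior, relatively precise sensation.
p_healthy = posterior_tone(prior_p=0.6, sensory_sample=sample, sensory_precision=8.0)

# A 'psychosis-like' agent: same evidence, but a stronger prior and a
# sensory signal treated as imprecise, so the prior dominates.
p_psychosis = posterior_tone(prior_p=0.9, sensory_sample=sample, sensory_precision=2.0)
```

With these illustrative numbers, `p_healthy` stays below 0.5 while `p_psychosis` rises above it, mirroring the higher conditioned-hallucination rate described above.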
So it's interesting that you brought the auditory thing into this, the auditory hallucinations, and I wonder if it's a visual cortex thing or if it's involved more with the auditory cortex. Because even when Chris was talking, you think about when you have two auditory tracks played simultaneously, like, listen for the phone number, and then you hear the phone number, but if you're not listening for the phone number, you're hearing something else. So it's interesting that selective attention in the auditory domain is much more apparent. Whereas, like where you see the blue dress versus the white dress, or the gold dress or whatever color it was, the two dresses, it's not so selective. If you set up that expectation in the visual system, like, look for the blue dress or look for the white dress, it's not as prevalent, I think. Yeah, I will agree that it's definitely harder to switch the interpretation from the blue to the yellow when I've tried that. I can get myself to do it, but it's way more effortful, right? And I don't know, Chris, if you have any ideas about why exactly that would be. I mean, I know there are cases of attention and binocular rivalry and things like that. In those cases, I think selective attention can play a bigger role, like the Necker cube, for example; that's maybe a little more analogous to the kind of multi-channel auditory input. I think it just depends on how you set the task up. I'm not sure the comparison is really a fair one, I suppose. I think you definitely can set up cases where expectations play a huge role in the visual system, that's one thing to say. Whether those things come under the purview of voluntary attention is another thing. But I still think there are lots of cases, depending on how you set up the problems.
I think binocular rivalry is a good one, or even something literally like where you move your eyes. I know that sounds trivial, but whether your eyes are moving around a page or not determines some different types of visual illusions. So if you think about, maybe illusion isn't even the right word, but what's the, sorry, we're working on a project about this right now. What's it called again? Tom Parr has a paper on this, Ryan, do you remember it? Is it Troxler fading? Yeah, I was gonna say Troxler fading, yeah. If you Google Troxler fading, I think that's a really powerful example of where just selective attention has a really major role in whether you perceive something as present or absent. Anyway. Yep, very interesting. It's almost like there are ambiguous and non-ambiguous stimuli in different domains, and maybe even interpersonal variation, one person being really good at picking up on something subtle in one domain or not. So we'll do Steven, then Dean, then Adam. I think this ties in actually to what I was mentioning earlier about when you scale things up, and I agree with what you're saying: when you scale things up, it may be useful to think about how someone facilitates the attention of people in a group, and how you think about taking their attention. It doesn't mean that whatever they consciously have in their subjectivity together is now read by the model. So its usefulness may be in how people's attention is taken. And I think it ties to this hypnosis thing as well, because I think your model speaks to top-down and bottom-up happening at different levels or different scales. Often when we think top-down, we think executive function, like executive function is telling me what I think. But like you say, you can have a subconscious level of trying to think what something is, even if it's in the subconscious, and hypnosis is doing that. Hypnosis is playing the game with someone.
So with these optical illusions, you say to someone, you're looking at this cube from above or below, and it flips. But it's not that you're being told it's this type of cube and your executive function is pushing it. It's still top-down, but it's happening at what we might call a more subconscious level. And I think that's quite interesting. So if I could just speak to that, I think there are a couple of things going on there. One thing, and this is a bit of a side point, is that top-down is a very ambiguous term. It can mean anything from basically recurrent loops from cortices higher up the cortical hierarchy, to something like voluntary control of attention via executive, like frontoparietal, networks or something like that. Those are just different phenomena. I think in our model, if we were to build a model of this, control of attention would presumably be at, you would put the dorsal attention network at, the second, slower timescale of the model, I think. Maybe, Ryan, you can speak to this as well; I haven't really thought about this much. I would have said the executive control network, sort of thing, I guess, is what I would have thought of just off the top, but... Yeah, you're probably right about that. I mean, visual attention, right? Obviously the dorsal attention network was originally defined specifically in relation to visual attention, kind of quote-unquote top-down visual attention. But in terms of directing visual attention in a goal-directed way, I think that would be more the executive control network than the dorsal attention network.
I mean, on the hypnosis point, I did my undergrad at Macquarie University, and they have a whole hypnosis group there within the cognitive science department. And so I just feel acutely aware of the fact that I don't know anything about it. And it's also an extremely complicated phenomenon. You can use hypnosis as a model of various delusions. Like you can induce, is it called mirror self-agnosia, or something like that, where you no longer recognize yourself in a mirror. It's a really powerful tool, but I have absolutely no idea how it works. And I think it's very tempting as modelers and theorists to go into various experimentalist domains, like, hey, here's how my overarching framework explains your thing, and I kind of wanna resist that. I know a little bit about visual consciousness and how expectation and all that stuff works there, but hypnosis is such a hairy topic, I just don't know. Yeah, I mean, I would also say, there are a few different things that have been brought up here, right? It's not clear, for example, that the kind of auditory hallucination story I was talking about earlier, in terms of relatively more precise prior expectations creating a stronger tendency for auditory hallucinations, generalizes mechanistically to other domains. So actually, in the visual domain, there's this whole other body of work where it looks like people that have delusions, again as part of psychosis, actually show the opposite weighting in the visual system, right? Where they actually assign too much precision to low-level, bottom-up visual signals.
So there are these interesting cases where, if you look at eye tracking, people with delusions, or psychosis, end up actually doing better at certain tasks than healthy people, but worse at others. One example of this is if you have people do visual tracking, follow a little moving target. If that target switches directions really quickly, the people with psychosis end up actually following it better than healthy people, because healthy people continue to follow it for a little bit after the turn in the predicted direction, right? Prior expectations are playing a stronger role, so the people with psychosis actually do a little better. Whereas if instead you have something where the target temporarily moves behind some kind of barrier, then healthy people do better, and people with schizophrenia kind of lose it, because prior expectations don't do enough to keep them following what the trajectory would have been as it goes behind the visual occlusion. And there are lots of other cases, for instance in the proprioceptive domain, where people talk about the same kind of thing: people with schizophrenia actually do better with things like force-matching tasks. There's more to the story there, but people have tried to link that to delusions about agency and things like that. So the general point being that the way a computational mechanism looks like it operates in one sensory domain can still operate very differently in another sensory domain, even in the same disorder, right, in the context of the same psychosis. So these things shouldn't be expected to generalize across domains, is one point.
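The tracking asymmetry described above can be sketched with a toy precision-weighted tracker. This is my own illustration, not a model from the paper or from the studies mentioned; the function name, the blending rule, and the weights are all invented for the example. The point is just that an agent weighting its own predictions heavily overshoots at a sudden reversal, while a sensory-dominated agent stays closer to the target there.

```python
def track(observations, prior_weight):
    """One-dimensional tracking: each step blends the extrapolated
    prediction (previous estimate + believed velocity) with the new
    observation. prior_weight is the relative precision of predictions."""
    est, vel = observations[0], 0.0
    estimates = []
    for obs in observations[1:]:
        pred = est + vel                                  # extrapolate
        new_est = prior_weight * pred + (1 - prior_weight) * obs
        vel = new_est - est                               # update velocity belief
        est = new_est
        estimates.append(est)
    return estimates

# Target moves steadily right, then abruptly reverses direction.
target = [0, 1, 2, 3, 4, 3, 2, 1, 0]

prior_led = track(target, prior_weight=0.6)    # prediction-dominated agent
sensory_led = track(target, prior_weight=0.1)  # sensory-dominated agent

# estimates[4] is the step right after the reversal (target back at 3):
# the prediction-dominated tracker overshoots the target by more there.
```

This only captures the reversal case; the occlusion case, where strong predictions help rather than hurt, would need the observation stream to drop out, which this sketch doesn't model.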
The other point was in relation to things like hypnosis, which I don't know a lot about. As part of a textbook chapter, I was asked to write a paragraph or two on hypnosis, so I kind of waded into that water for a little bit, and it's actually pretty messy. There are multiple different theories about how hypnosis works. One of them relates to this idea that the hypnotist repeatedly gives suggestions that are set up such that they will be confirmed, right? So the hypnotist says, hold out your arms, and then says, your arms are gonna start to feel heavy and then they'll start to fall. Obviously, anybody who holds out their arms will eventually start to feel them getting heavier and start to drop, just as a result of the shoulder muscles getting tired, right? So if you do a number of those things and the predictions made by the hypnotist are confirmed, then it could be that you're just coming to assign a really high reliability to whatever the hypnotist says. In that case, if the hypnotist comes to be assigned really high precision, then their suggestions can set up prior expectations that end up having a much stronger influence on what you do. That's one idea. But there are several other theories of hypnosis that just have to do with conforming to social roles and things like that that you're kind of told to do. So I wouldn't necessarily feel that confident saying much about how this relates to hypnosis, other than that, in a broad sense, it can involve tricky ways of manipulating prior expectations in other people. But again, even that is pretty speculative.
I mean, if I could just quickly throw in another complicating factor: even within the visual system, having a strong prior in one domain doesn't generalize to others. There's this really great paper that just came out, I think last year, in Cognition, where they looked at representational momentum, Mooney images, and, goodness, I think there was one other, I think it was illusory contours, and they had one other top-down expectation paradigm. And there were large individual differences between all of them, and large individual differences in the effect of prior expectation across those different tasks. So I think there are a lot of complicating factors, a lot of moving parts, that we have to deal with when we talk about these things. Okay, we're gonna return to the stack and go to Dean, then a question from the chat, and then Adam. So I'll go back and be ambiguous. When I was looking at your diagrams, there was a question I wanted to ask, but we ran out of time last week. Do you think we can say that when a person says, I see this, or, I do not see this, it's just a yes or no? Or can we apply an idea that there's a certain amount of acceptance and a certain amount of letting go? I kind of bundled that under how we reconcile, or is it something completely different? I mean, how do you look at it when a person's actually looking at something and going, okay, I do not see the box? Is it that simple, a yes or no, or do you think there's something more built into it? So there is a whole cottage industry, and I feel like cottage industry might even be an understatement, on how to collect verbal reports and the proper ways of doing it. There's the perceptual awareness scale, where you in some sense have a description of the content: I didn't see anything; I maybe saw something, but there was no specific content.
Like you have a vague feeling you saw something, but it was a bit blurry. And then you had a full, vivid experience. That's one way. You can have people place bets, that's another way. You can have people do confidence ratings. Lately there's been some incredibly clever work. So just to give a shout-out to people whose work I really admire: Hakwan Lau and Megan Peters have a really great paper in eLife where they basically show that people seem to have optimal, in the sense of Bayes-optimal, introspective access to their own performance. So basically, people never behave correctly without in some sense knowing that they behaved correctly, as it were, in these very tightly controlled experimental situations. So yeah, maybe I was a bit flippant before when I said you have to take reports as foundational. I stand by that statement, but there are years and years of careful psychophysics that have gone into establishing that, yeah, these things are reliable. And basically it comes down to this: I wouldn't believe someone if they said, yeah, I see something, but they were getting like 50% on a forced-choice task about the orientation of a Gabor patch while claiming they could see it. I just wouldn't believe them. Generally speaking, you would expect that if you see something, your performance should also go up. That is not to say, however, that you can't have increased performance without awareness; there is some convincing evidence showing those two things do dissociate, and that's kind of what we were trying to show here. So hopefully that answers the question. Yeah, it does. Nice. Yeah, I noticed on this extended taxonomy, the bottom right, it's like consistent prior, high attention, strong signal, the perfect storm, and then it's 100% seen, 100% correct. And then again, kind of loosely mapping it to our experiences with other attention phenomena.
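The point about not trusting a "seen" report at chance-level forced-choice performance can be illustrated with a toy signal detection simulation. This is my own sketch (the function name and the d-prime values are made up): an observer with zero sensitivity sits at roughly 50% correct no matter what they report, while genuine sensitivity shows up as above-chance accuracy, which is what lends the report credibility.

```python
import random

def simulate_2afc(d_prime, n_trials=10_000, seed=0):
    """Two-alternative forced choice on Gabor orientation (left/right).
    Internal evidence = signed signal (+/- d_prime/2) plus unit Gaussian
    noise; the observer answers 'right' whenever evidence > 0."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        tilted_right = rng.random() < 0.5
        signal = d_prime / 2 if tilted_right else -d_prime / 2
        evidence = signal + rng.gauss(0.0, 1.0)
        if (evidence > 0) == tilted_right:
            correct += 1
    return correct / n_trials

blind = simulate_2afc(d_prime=0.0)   # no usable signal: accuracy near chance
seeing = simulate_2afc(d_prime=2.0)  # genuine sensitivity: well above chance
```

A claim of vivid seeing paired with `blind`-level accuracy is the case Chris describes as not believable; `seeing`-level accuracy is what normally accompanies a credible report.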
Like whether you're sure about seeing. And of course, consistent prior or inconsistent prior, the way that you tune the priors, these are variables in the model, so people can play around. This isn't a specific yes or no, just examples of how rich the phase space is in terms of what the model can do and what it can capture. So here's the question from the chat, and then it will go to Adam. The question from the chat is: any thoughts about the possibility of using this model to investigate lucid dreaming related phenomena, like the failure of implicit metacognition while dreaming? Thank you. So, how about dreaming or lucid dreaming? That stuff is super interesting. I don't think this model captures that at all, except in a computational sense. I don't know what task it would be modeling specifically, but we can speak to it if we're being a bit metaphorical about it. So, I don't know anything about lucid dreaming, but I presume it's a case where basically you are dreaming, not just asleep, you have an experience that goes along with it, and also you are aware of the fact that you are dreaming. That's really interesting. I think it would have to come back to, I don't know how one would capture metacognition in this framework, that's one thing to say. Maybe Ryan, you have some thoughts on this, but it would relate to basically how one thinks about metacognition in these deep temporal models. Yeah. I mean, metacognition in and of itself comes in a bunch of different flavors, right? So there are beliefs about how good your memory is, for example, and that can be kind of metacognition. There are beliefs about your beliefs, or beliefs about patterns in your attention; there are a bunch of different types of metacognition. It's just sort of beliefs or operations over representations of your own cognitive processes.
So, I mean, the tricky part is that you kind of have to have a higher level that treats what you chose to do as a voluntary cognitive process, observes that, and then infers something about it. And then there's the way that ends up influencing the cognitive operation level below, right, the way the higher metacognitive level ends up setting priors on the lower level. I'm not sure the right way to think about that. You'd have to look at the literature on the way metacognition affects performance in cognitive tasks, which I'm definitely not well-versed in. One thing to say, just very briefly: in the discussion section of the paper, we discuss alternative models. I think probably the model closest to ours in the literature is Steve Fleming's higher-order state space model, and there he explicitly discusses ignition-related phenomena and metacognitive inferences about the presence or absence of stimuli. So I would say, if you're interested in that kind of stuff and how to think about it in a Bayesian framework, look there, it's a really great paper. Yeah, another one that comes to mind is a paper where the first author is Klaas Stephan, from 2016, where he talks about the way you can apply active inference models to model homeostatic and allostatic control processes. So basically, anticipating, for example, that your temperature is gonna drop, and therefore increasing body temperature a little in advance, so that when it subsequently drops you stay within a kind of survivable homeostatic range. I'm just using temperature here as an arbitrary example of any kind of internal bodily variable, glucose levels or hormone levels or anything like that.
But they do talk about a metacognitive level that you can add above the allostatic regulation level, which more or less tracks the efficacy, the kind of success, of the allostatic level, and how, if the allostatic level repeatedly fails to keep bodily variables within homeostatic ranges, the higher level can basically infer that allostasis will fail. And they show how that can plausibly lead to certain sorts of symptoms of depression, where you have low beliefs about the efficacy of your own allostatic processes. So that's a paper I'd recommend looking at for one example of the way you might apply one type of metacognitive level on top of this kind of model. I also just remembered a paper that came out yesterday; I shared it on Twitter, sorry, I forgot about this. It came out of Micah Allen's group at Aarhus. I've forgotten the first author's name, I'm really sorry about that. But they explicitly discuss predictive processing and active inference theories of consciousness in relation to interoception and metacognition, and they have a whole discussion of this type of thing. So if you're interested, maybe just check my Twitter page or Micah Allen's Twitter page. Sounds good. We'll go to Adam and then anyone else who raises their hand. Hi, perhaps tying a few of these things together.
I was wondering, with respect to hallucinations and psychosis and forms of delusions, if maybe you could potentially model some of it in terms of not having access to mental acts, which might have associated efference copies that could potentially be adjusting precision, and also having different components of the associated mental act which you may or may not attribute to yourself, so aspects of interoceptive inference that could lead to senses of willing or ownership. But then, by not having that access, the access to the action generation itself could basically adjust how you're attributing, what sense you're making of, the associated perceptual events. So, how would I say this: if you're talking to yourself in thought, something like thought as inner speech, like Vygotsky would talk about, well, if you don't have access to the generation part for some reason, I don't know how much they can be separated, then the voice might just appear to you in a way where you don't have a sense of ownership, if there's not the right type of, maybe, interoceptive coupling to the generation process, or just a lack of metacognition to contextualize it. Yeah, I really like this idea. I mean, one thing to keep in mind here is that the sort of thing you're talking about, where people have different experiences, where there are different contents in experience, I mean, basically that's what you're talking about, right? Different contents that can be in experience. Those, in a model like this, are just going to amount to different sorts of state spaces that you can infer posteriors over, right? So you can have beliefs about how much ownership you have over your body, or you can have beliefs about something going on in your body, interoceptively.
And all those things are just going to work the same way: you're going to have some sort of afferent signal, you're going to have some prior expectations, obviously, and then you're going to infer some posterior over whatever the contents are of the thing in question, right? So anything along the lines of, what is the content of one thing versus another, is just going to amount to a particular level in a model that infers posteriors over particular contents. So none of that is going to have to do directly with our model, other than the fact that, given posteriors over any content, if our model were applicable to that, you could just treat those contents as the lower level, right? So you'd have inference over some content, and then our model would say something about the processes that make that content conscious or not, consciously accessible or not. So the only thing our model might be able to say is, if you swap out the square versus lines in our model, whatever the beliefs are at the lower level, and swap in representations of some other content, whatever you're interested in, something about the body, or what sequence of speech you're hearing, what word you just heard, anything like that, as the contents of the lower level, then our model might be able to say something about the processes that determine whether those contents are accessible or not. But beyond that, our model doesn't have much to say. Yeah, if I could say something on that as well. I really like the idea, I think, along similar lines to Ryan, because it casts decisions about ownership as just having access to some content or another.
And so for me, the essence of any type of conscious access or awareness process is basically whether it's integrated into this temporally deep representation and, in doing so, available to all of these sub-processes, per global workspace theory. And relating this to the metacognitive point — I think this is actually a spot where maybe there's a difference between our theory versus an active inference theory that gives an explicit role to metacognition. So for me, it's just the fact that you're aware of it because it's integrated into this representation, right? Whereas maybe a higher-order theorist would say: no, you have to also infer that you are seeing it, or that I am seeing it, or something like that. And so I think there are actually really interesting empirical questions there. I don't know — maybe it would be nice to see a discussion of active inference theories of visual awareness with metacognition versus active inference theories of visual awareness in general. Anyway, I know you've written a lot about different theories of consciousness, Adam, so I don't know if you have any thoughts about that. I mean, I know that there's this preprint — the first author is Lars... well, I can't remember his first name, but his last name is Smith — that does propose a type of model of metacognition. And attention works a little differently there than in ours, but essentially you do have a second level that controls a precision parameter on the first level. And then there's a third level in there that infers things about the control of that precision parameter at the first level, if I'm remembering right. And the way they talk about it, they use it as a way to think about meditation and mindfulness and things like that.
You know, I don't remember it in enough detail to say a lot about my specific feelings on it, but that is an example that I'm aware of. Again, it's just a preprint at this point, but it's another attempt to try to model something like metacognition. Cool, interesting questions. Adam, did you want to say something else? Yep, go ahead. Yeah, so I basically want to throw your model at everything. But in terms of metacognition and what I've worked on in the past, mostly I've been focusing on trying to cross-reference various theories to see the points of overlap and non-overlap — mostly considering accounts of dynamic modularity that might apply to a global neuronal workspace architecture, or integrated information theory, where you can think of their complexes of integration, those modules, as themselves being — doing an IIT treatment of those and saying, where's the overlap? Basically trying to give semantic content to IIT via GNWT as an architecture for Bayesian model selection, and the whole thing kind of being adjudicated by the free energy principle and mostly active inference — not at nearly the technical depth that you have. I'm only beginning to move into things like metacognition and higher-order forms of consciousness, and the ideas are really speculative. So one idea is that as you're, let's say, imagining enacting something, you might start out generating experience from an egocentric perspective, but then this enactment would auto-associatively also have third-person allocentric representations from the ventral visual stream or something like this, where you've seen people, or yourself, doing similar things.
And so you might then be able to get access, in the mind's eye, to this third-person point of view on you, with some unfolding — some sort of moving back and forth between a first-person... like I said, in the models I'm working with, phenomenal consciousness and access consciousness. There's a sense in which, in Rudrauf's projective geometric modeling, it's always from this first-person egocentric point of view, but then the idea is you're looking at an objectified you, and this would be part of metacognitive awareness — contextualizing you in terms of you doing actions, and then this third-person little homunculus doing actions that you're seeing in your mind's eye, and moving back and forth. That's the basic space I'm moving into, and I don't know how much of that's gonna work or how you would model that. I mean, one thing — again, when you think about this in kind of informal terms: in some sense, the content you're describing is a belief that you are doing something, right? So that's just the content in the space, right? So same kind of thing: swap out our lower level with a state space that is beliefs about what I am doing, right? Then it's just the same thing. You have a representation about what you are doing, and in our model, you would either gain access to the representation of that content or not, via its integration with this deeper temporal level — granted, beliefs about what I am doing or not are already pretty temporally deep — but were those sorts of contents plugged into our model, that's what it would say. It would say that those contents need to be integrated with this deeper temporal model that is able to form posteriors over something that incorporates them.
And so again, at the end of the day, it really just comes down to this distinction about the content being represented, and whether or not the posteriors associated with that content get integrated with a higher level that can do the sorts of things the higher level described here can do. So it's just a question about whether something like that is the content that you want to be using in goal-directed action or reporting, or whether it's just representations of the lower-level kind of visual content, right, being integrated with this level that's involved. So to me, again, this just has to do with different levels of representation and what the contents are that you're becoming conscious of. Oh, and by the way, just to mention — I looked up the paper I was mentioning. It's called "Towards a Formal Neurophenomenology of Metacognition", and the first author is Lars Sandved-Smith — just to do justice to remembering the first author's full name and the name of the paper I was talking about. So, sorry, I think you started saying something at the same time, Chris. Yeah, Christopher, then Adam. Oh, I was gonna change the topic very slightly, but I was wondering: how do you think about phenomenology, or phenomenal consciousness? To me, it comes down to the nature of the representation that you're gaining access to. So — you talk about the projective consciousness model, which is really cool work. There, this perspectival nature of consciousness comes from the structure of the generative model and the fact that they're using this projective geometry, and that's built into the structure of the generative model. I'm just curious how you think about this and how that kind of maps onto IIT, because, at least as I understand IIT — I've never really been clear how the content of consciousness fits into IIT. Like, ah, man.
Anyway, yeah, I'm just a bit interested to hear your thoughts and how it all fits together with global workspace theory. Well, yeah, I would say there are some kind of giant begged questions in IIT. I think, though, it's not nothing that they start from axioms of what they think are parts of phenomenology and then work their way to this sort of way of handling systems. I think it's a decent prior for what we should look for in physical substrates — I wouldn't completely throw it off. But they then would say: if you have these axioms that characterize experience — intrinsic existence, composition, information, integration, exclusion — if you have these different properties, then it is sufficient to bring about phenomenality, because they started from phenomenality. Oh, no, I don't think that follows at all. That being said, I think it does potentially give us some priors in trying to think about physical and computational substrates of consciousness. So that would be the cross-referencing. And then the other part would be: if you're thinking of the global neuronal workspace as potentially a trading-off of modularity being part of the physical implementation, well, then the question is, when do you have larger, big modules functioning as dynamic cores or workspaces, and when do you have more fragmented, local processing? So in theory, you could get Bayesian model selection with discrete updating with a bunch of small beta complexes close to the modalities, or it could be this big, sprawling alpha complex that's multimodal in all these different ways. But then you're moving back and forth between these degrees of synchrony. It seems like IIT could potentially be useful there for describing that. So that was one of the ideas there, putting them in relation.
In terms of bringing it to Rudrauf: for me, the object I would want — what would be a potentially minimal condition for phenomenality — would be some sort of joint distribution over your body pose and visuospatial awareness, as the minimal thing, at least for human-like consciousness, and a series of estimates of that, basically roughly at alpha frequencies. And then thinking of this as iterative Bayesian model selection via a global workspace architecture, but not necessarily one where there's access — that's a more sophisticated kind of consciousness. I would call it pretty darn global — you have something spanning all of posterior cortex — but it's not the kind of workspace you're dealing with, where you're actually having knowledge and access. Thanks, Adam. So if anyone else wants to, they can raise their hand. Otherwise — you did just ask them, so let's mix it up — there was a very good request: Ryan or Christopher, if you could just look at this part of the figure, which I know is a structure we walked through in the model stream as well, and map some of these bigger ideas that we've been talking about to some of the letters, in a way where people can look at this outline and say, okay, the experiment — we talked about a bunch of visual and interoceptive kinds of experiments — where did it map in this paper, so that people will always be able to go back to this paper, look at the code, and then map some of these bigger metaphors to the experiment that you did here. Do you want to take this, Ryan, or do you want me to? Either way, it doesn't matter to me. Go for it. So, how detailed would you like me to go?
Would you like me to explain what the little squares are, and what the D's and A's and B's and S's are, and all that stuff? Let's go for all the letters on the right side, and maybe even some on the left. All right, so this is a partially observable Markov decision process, and it's a hierarchical one. Let's start at the bottom. O's are observations, and S's are hidden or latent states, and what the A matrix does is provide a mapping between those two things. So the circles are random variables, and the squares are essentially functions — in this case, probability distributions — that map between those random variables. The arrow going down with the little A superimposed on it is basically expressing a conditional probability distribution, saying that O, the observations, depend upon the hidden state. Now, as we start to evolve through time, we need transitions between the discrete hidden latent states in the world, and those are what's described by B. So this is something like: given the hidden state at one time step, what's the probability of each hidden state at the next time step? It describes S at t conditioned on S at t minus one. Now, what's special about hierarchical models is that you have another layer on top of that, where hidden states at the first level are now being treated as observations by the second level. So the posterior probability at the first level basically acts as an observation for the second-level A matrix. And then at the second level — well, there should be a D at the first level too, but that's fine — basically what this second-level hidden state does is provide a prior over those first-level states. And at the first level you can then have — we just have one time step at the first level, but you can have as many time steps as you want.
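To make the D, A, and B pieces Chris just described concrete, here is a minimal numpy sketch — not the authors' actual code, with made-up numbers — of a two-state ("lines" vs. "square") single-level model: D is the prior over initial states, A maps states to observations, and B transitions states through time.

```python
import numpy as np

# D: prior over hidden states at the first time step, P(s_1)
D = np.array([0.5, 0.5])                      # [lines, square]

# A: likelihood mapping; columns are hidden states, rows are observations
A = np.array([[0.9, 0.1],                     # P(o = "lines"  | s)
              [0.1, 0.9]])                    # P(o = "square" | s)

# B: transition matrix, P(s_t | s_{t-1}); columns sum to 1
B = np.array([[0.8, 0.2],
              [0.2, 0.8]])

# Predicted observation distributions at t = 1 and t = 2
p_o1 = A @ D                                  # marginalize over s_1
p_o2 = A @ (B @ D)                            # propagate the prior one step

# Simple Bayesian inversion: posterior over s_1 after observing "square"
likelihood = A[1, :]                          # row for o = "square"
posterior = likelihood * D / (likelihood @ D)
```

In the hierarchical case, a posterior like this one at the first level would then serve as the "observation" for the second-level A matrix, exactly as described above.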
The metaphor that's often used is the minute versus the hour hand on a clock: they're evolving at different time scales. So in this diagram, we show the levels evolving at different rates — multiple time steps at the second level for each first-level model. At the second level, there are again transitions between these latent states, and that's also where we have policy selection — that's why we put it up there. So, policy selection is essentially — if you imagine a whole series of hidden Markov models; this is my favorite way of describing policy selection. These hidden Markov models are basically just a partially observable Markov decision process without the decision aspect. If you imagine a whole series of basically these graphs, and you are deciding which graph is gonna be my future, you are trying to find the graph with the highest model evidence. That is what doing policy selection computes. So you are choosing the transitions that will take you to the graphs with the highest model evidence. I don't know — Ryan, do you wanna kind of clean up some of that explanation? Let's go to the last few variables. Yeah, so, I mean, you didn't do G and C and E, right? Yeah, sorry. Okay, right. So pi, just like Chris said, is your distribution over policies. G is the expected free energy — that's essentially the function that decides; you can think of it as the value of each policy. And that is with respect to C, which is your preference distribution — essentially the thing that specifies which observations you want and which observations you don't. Now, somehow I see that an E has been placed with an arrow down to G, and that's not correct: E should be going straight to pi. If you're gonna include E — I don't think we did anything with E in our model, so I don't think E needs to be there.
But E, if it's used, is just a separate prior over policies that can encode something like habits. So it competes with the expected free energy when you're inferring what the posterior over policies is. But yeah, otherwise I feel like Chris described it well. I mean, it's always a little easier if you're in control of the pointer, so you can actually point to the things that you're talking about. I think you also asked us to relate this to an experimental paradigm. So maybe briefly click down to figure two — what you've labeled as figure two there — so we can just look at that. The idea here is that there was a forwards mask: at the beginning of the paradigm, this was just a series of lines, and there were these stimuli on the outside. At the second time step in the (now simulated) task, on some trials some of the lines kind of self-organized into a square. So, in our discrete state space model, either a square was present or it wasn't. And at the third time step, it was just replaced by more lines. And then afterwards, we asked the agent to construct a report of that. So you can think of the first level of the model as being all of the features of the stimulus, and also the location of attention. This might be something like saliency maps in posterior parietal cortex, which are directing wherever attention's being pointed, and then you might have the features of the stimulus, which are represented in various places. At the second level — this is basically our frontoparietal cortices — we're tracking the evolution of states at the first level.
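The relationship between pi, G, and E that Ryan describes — the habit prior E competing with the expected free energy G to give the posterior over policies — can be sketched in a couple of lines (illustrative numbers only; this is the standard form pi = softmax(ln E − G), not code from the paper):

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax."""
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

# Expected free energy for three candidate policies (lower = better,
# relative to the preferences encoded in C). Numbers are invented.
G = np.array([2.0, 1.0, 3.0])

# E: a separate prior over policies encoding habits; it competes with G
E = np.array([0.1, 0.3, 0.6])          # habitual bias toward policy 3

# Posterior over policies
pi = softmax(np.log(E) - G)
```

Here the habit toward policy 3 pulls the posterior that way, but policy 2's much lower expected free energy still wins out — the "competition" Ryan mentions.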
So it's tracking that at the first time step there were lines presented; at the second time step there was a square presented — or maybe there wasn't; and at the third time step there were more lines. And you can imagine that in a real task, all of that unfolds in the space of about a second. Yeah, so you can think about it — at the second level, really, the content in question that it's inferring is the sequence, right? It's not whether it's a square; it's whether it was lines, square, lines, or just lines, lines, lines. Oh, and if you're thinking about where experience lives — obviously we don't experience a sequence, right? I think — and Ryan and I have been chatting about this a lot lately, because it relates to work we're currently doing — what's going on is that what we experience are essentially the updates to our second-level beliefs. And that's really crucial. The reason why it's crucial is that we know empirically that what we might describe as brain processes at the first level of the model can be either conscious or not. And contents that are conscious — when you are conscious of something, it has all of these really important functional consequences. So here it might be being able to maintain it for report, but also, if I see something, I can then voluntarily maintain it in memory for as long as I want, as long as I don't get distracted — there are limits on human psychology. But that's a really important function that's enabled by being conscious of something, right? And that allows us to do things like construct reports of our experience. And by report, I just mean something very general.
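The second-level inference just described — scoring whole-sequence hypotheses ("lines, square, lines" vs. "lines, lines, lines") against the first-level posteriors that serve as its evidence — could be sketched like this; all numbers are invented for illustration and this is not the paper's actual scheme, just plain Bayes over two sequence hypotheses:

```python
import numpy as np

# First-level posteriors over {lines, square} at each of three time steps,
# acting as the second level's "observations" (invented numbers)
first_level = np.array([[0.95, 0.05],   # t1: almost certainly lines
                        [0.30, 0.70],   # t2: probably a square
                        [0.95, 0.05]])  # t3: almost certainly lines

# Two second-level hypotheses about the whole sequence (rows: t1..t3)
h0 = np.array([[1, 0], [1, 0], [1, 0]])   # lines, lines, lines
h1 = np.array([[1, 0], [0, 1], [1, 0]])   # lines, square, lines

# Likelihood of each hypothesis: product over time of the probability it
# assigns to the first-level evidence
L0 = np.prod(np.sum(h0 * first_level, axis=1))
L1 = np.prod(np.sum(h1 * first_level, axis=1))

prior = np.array([0.5, 0.5])
post = prior * np.array([L0, L1])
post /= post.sum()                        # posterior over sequences
```

On this toy version, the "update to second-level beliefs" that the speakers associate with experience would be the shift from `prior` to `post`.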
In this model, we kind of have the toy thing of being able to put a sentence together, but in an empirical paradigm it could also just be hitting a button or giving a confidence rating or something like that. Cool, thank you for that — interesting answer. We'll have Steven and then Adam, and then anyone else who wants to raise their hand, and also any last questions from the live chat as we slowly land the plane. Yeah, I was just gonna ask, in relation to those lines and the way that the square appears — like you said, this is actually a graphic representation, but when someone looks at it, there's probably some special device that presents this. Is it that the lines rotate such that they line up, and at some point it just happens, and then they un-rotate? And just one other question: why didn't you use negative space — say they line up to make a negative-space square? I just wondered whether that relates to the choice of how it pops in and out of perceptual — sorry, phenomenological awareness — phenomenological consciousness or access consciousness. So basically, in our model we have discrete hidden states, which are categorical distributions over lines or squares, and external states, which are real. This is just a graphical representation of a way to think about that, but it's really nothing to do with how it works mathematically. Then, in terms of the actual empirical task: this is a task that we basically borrowed from Michael Pitts, a cognitive neuroscientist who's done a lot of really important work in the area of visual consciousness. As to why they chose to have these — I think, from memory, there are videos attached to this paper.
From memory, I think the lines jitter around, and then occasionally a subset of them will all line up into a square and then go back to jittering. As to why they did that versus other methods — I don't know, really. There definitely are subtle things you can do to a stimulus which have major impacts on whether you have conscious access to it or not, but the phenomenon of inattentional blindness is super general. The really famous example, that everyone gets shown in undergrad psych, is: you're tracking a group of people playing basketball, and you're asked how many times someone with a white shirt gets passed the ball, or something like that. And your attention is so distracted, so consumed with that, that a bear or a gorilla can walk into the middle of the screen — in the dead center of your fovea — wave at you, and then walk away. And about 50% of people don't see it at all. And that's really well replicated across a bunch of — or they don't report seeing it, I should say. I think there's some debate about whether this is a memory phenomenon; there are lots of subtleties there. But hopefully that answers the question. Yeah, also, one advantage and one rationale for these pure modeling papers that we're discussing today is that the space of possible human experiments, or even any other kind of real-world experiments, is very limited — it's very limited what you can actually have a real set of humans do. And so it's great to have tools that help us explore some of the patterns that we're looking for, understand variability, and do statistical calculations — like how many participants we might need, given a variability range, if we wanna capture such-and-such an effect without distorting it. That's critical information.
And if you don't have the model, then the experimental design phase is totally blind — it's very much a shot in the dark. So this helps inform structures for even thinking about how to design human experiments — for example, as we're talking about here with visual awareness, but it could be for other kinds of experiments. So, Adam, and then anyone else who wants to make a comment. See you later, Alex, bye. Adam. Hi, I'd be curious to know more of your thoughts on the potential richness, or lack thereof, of experience. So, at one point: is consciousness — phenomenal or access — more experienced as a series of snapshots that are static and sequential, but we don't notice it, like a flip book? Or is it more like a continuous stream? Like, if you're getting this discrete updating, are these updates over the agent in motion? Like if you take a camera — sometimes on your phone there's a little brief look ahead and back — what would be the nature of these discrete updates? Yeah, hi. I mean, I'm trying to remember — I remember reading Stan Dehaene's book a few years ago, and there is some work, I think, showing there's a minimal time for updates to the content of consciousness, such that they have some kind of discrete character to them. I don't remember what the actual — Psychological refractory period. Yeah, do you remember the actual milliseconds? I can't remember — like 50 milliseconds? Yeah, I don't remember offhand, but there is something that might be similar to a refresh rate or something like that. But I don't remember too much about that offhand. But one thing to say is that there are these sorts of discrete updates in our model — though it's also good to remember that the higher-level representation is specifically about this sequence, right?
Which is integrated in the sense that the hypothesis is about a whole sequence, which doesn't necessarily need to be thought about as having these sorts of discrete chunks to it, right? It's just this single hypothesis about what the whole sequence was. And so you might think about that as having a more continuous character to it, despite the fact that it is updated in some ultimately discrete fashion, in these really fast bins. So maybe we could think about it that way. But in terms of richness — I just tend to think of rich as meaning there are more features that are more precise at the lower level, right? So instead of just lines or a square, in our example, there could be a ton of different lower-level representations about color and shape and size and all these sorts of things that are represented in some sort of joint way and become accessible. So that's one way to think about richness: what are all the things being represented that become accessible together. But I don't know if you meant something else. Can I ask you a question, actually, about the continuous modeling? Is it possible that active inference could have a continuous mode? I know that it would get rid of a lot of the discrete benefits and the sequential message passing, or maybe other heuristics, but could there be a continuous format — continuous time or space? Yeah, there are continuous state space models, and there are also mixed models. Mixed models are especially nice and probably a lot more realistic here, because a lot of the things that are represented by the visual system are continuous, right? Motion, for example, is continuous; brightness; all these sorts of things.
And so those can be perceived and represented on a continuous scale at the lower level, but still get passed up to a discrete representation at the higher level. We didn't do that here, for simplicity, but you could. So, just to clarify — in Scaling Active Inference we did talk about the continuous state space. I was just wondering if it's possible to have a continuous-time active inference model, rather than a discrete-time model of t equals 1, 2, 3 — just bringing it up. Okay, then Christopher, and then Blue. So yeah, three things. Just quickly to answer your question, Daniel: a hidden Markov model is just a super general thing. It's basically just when you have a Markov chain, but each state of the Markov chain links to some outcome. Both the outcomes and the latent states can be continuous. When you have discrete latent states and continuous outcomes, you end up with basically a mixture-of-Gaussians model. And if you have continuous states, you basically end up, I think, with a Kalman filter. And you can stack those on top of each other and do all the same things. It's just — for all the reasons you say, things get really complicated when you start to move to continuous time. But I completely agree with Ryan. I actually think there are computational, functional reasons to think that decision-making is discrete, and so at the level of decision-making, computationally speaking, we should use discrete state space models. I also think that at a certain level we need continuous state spaces — obviously we represent continuous quantities. And Thomas Parr — we've chatted about this before in the model stream — has a really nice paper on the discrete–continuous interface. That's something I'm actually really interested in. I'm not sure.
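Christopher's point — that an HMM with discrete latent states but continuous (Gaussian) outcomes behaves like a mixture-of-Gaussians model — can be illustrated with a small simulation; parameters are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# HMM with 2 discrete latent states and continuous Gaussian emissions:
# each state emits from its own Gaussian, so the marginal outcome
# distribution is a mixture of Gaussians.
B = np.array([[0.9, 0.1],
              [0.1, 0.9]])          # sticky transitions P(s_t | s_{t-1})
mu = np.array([-2.0, 2.0])          # emission mean per state
sigma = 0.5                         # shared emission std

def simulate(T, s0=0):
    s = s0
    states, outcomes = [], []
    for _ in range(T):
        s = rng.choice(2, p=B[:, s])               # sample next discrete state
        states.append(s)
        outcomes.append(rng.normal(mu[s], sigma))  # continuous emission
    return np.array(states), np.array(outcomes)

states, outcomes = simulate(500)
```

Plotting a histogram of `outcomes` would show two bumps around −2 and +2 — the Gaussian mixture. Swapping the discrete chain for a linear-Gaussian latent state would instead give the Kalman-filter case he mentions.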
I don't know to what extent we should take these things as idealizations or as general neurophysiological predictions. I'm super interested in that, in other words, but I don't know. Another thing I would like to say is that it's not as though only action selection has to do with discrete spaces. There are possibly a lot of higher-level representational things that are also discrete. So, for instance, concepts, right? I can have a bunch of continuously represented lower-level visual features, but I can use those to infer something discrete as well — for instance, that those features map onto my concept of a dog, right? Or my concept of some other animal, whatever. We do have these discrete categories that we map onto patterns of continuous features, and that's a whole other discussion. I think we talked briefly last time about gaining conscious access to the visual features versus gaining conscious access to the fact that it is a dog, right? Those are different levels of representation that can be attended to, and they may be able to be independently, or semi-independently, accessible separately. So I just wanna point out that not everything is going to be continuous in perception, when it's sort of parsed properly. And that's where — I was just gonna say that's where a lot of the category theory discussion comes into play. Let's do Blue, then Christopher for final thoughts. So I know we operate under the assumption that we perceive things continuously in visual space, but really the input is like 60 hertz, right? So theoretically, this object is consistently a dog, but if it were flashing back and forth to a cat at sub-perceptible levels, we wouldn't know, right?
So I think anything can be modeled in discrete chunks if you break the chunks into small enough pieces. Yeah, I've sometimes seen people be a bit smiley about this — someone says, "oh, discrete state spaces are unrealistic," and the comeback is always, "mate, you can always discretize things up to some arbitrary level, and then discrete state spaces work fine." I actually think that's a bit of a dodge, to be honest, because the way we're using these discrete state space models is in an ultra-discrete kind of way, right? If we had a hidden Markov model running at a 60-hertz equivalent or something, that would just be ridiculous — you should just work in continuous state spaces at that point. The computational and conceptual advantages of working in a discrete state space are gone; it would be so difficult. It's a really nice question, though. And if you do make it a finer and finer granularity, you keep some of the really big benefits of splitting. And sometimes it's actually easier — like in protein folding, they'll do a time step of the tiniest, tiniest amount and do millions of those tiny time steps, because it's still easier to fit that ultra-fine discrete-time model, with time steps that can be clustered across different computers, than to rewrite the whole codebase to do continuous modeling. So I agree — it's not like continuous is simply better. And it relates deeply to how we think about the continuum and the infinitesimal, so it's really an interesting area for active inference. So, Steven, and then any other closing comments?
You know, I suppose if we were to go back into the physics of it — with active inference, if vision is seeing the surfaces out there rather than it being an input signal — even if we have these brainwaves, some of that might be an artifact of cognitive science, which has got this input-process-output framing. At some level it's extracting — like Axel Constant talks about extracting information from quantum noise, random fluctuations and all this sort of stuff. So it could be that the discrete state space is also, sort of, because what's coming in is quite choppy, and at some point inferences have to be made that make it more like a signal, if that makes sense. I'm just putting that out there as another layer, at the lower levels of the retina and stuff like that. Sounds good. Any last comments? Otherwise, this was super interesting. I guess a closing question for the authors would just be: when's the next episode in this paper saga, or what's the next thing you're excited about here? In process, we have another paper that's maybe three quarters done, or something like that — kind of the next episode of this. It's different — all the simulations are done. It's an expanded model that does a lot more: it allows for a no-report paradigm, and selective attention and working-memory maintenance are explicit policies in the updated one, and there are a number of other advantages. But it's coming soon. Yeah, I think it's a matter of me writing it, really. Wow, too real. But thanks, everyone, for joining. This was really fun for 18 overall, and we look forward to probably seeing you again for 19 and beyond. Thanks to all the participants — you can fill out the survey for feedback in your events calendar invitation; otherwise we'll be talking through other channels. So thanks, everyone, see you later. Thanks.