Hello, everyone. Welcome to ActInf Lab livestream number 46.2. It's June 22, 2022. Welcome to the ActInf Lab. We're a participatory online lab that is communicating, learning and practicing applied active inference. You can find us at the links on the slide. This is a recorded and archived livestream, so please provide us with feedback so we can improve our work. All backgrounds and perspectives are welcome, and we'll follow video etiquette for livestreams. Head over to activeinference.org to learn about participating in livestreams or in the many other ActInf Lab projects. We're here in our third discussion on the paper "Active inference models do not contradict folk psychology" by Smith, Ramstead and Kiefer. We had a .0 with Ryan joining, and Jakub and Dean, where we talked through some of the background of this paper. And last week in the .1, we had a great discussion, went a lot of ways as .1s do, and opened up some threads to continue discussing in this .2. I know each of you has some things to bring up. There are also a few key sections of the paper that we haven't gone into in much detail, so there'll be a lot to do in the .2. And of course, everybody watching live is also welcome to make comments. We'll start with just an introduction and warm-up. We can say hi and add anything else that we're looking forward to talking about, or to reducing our uncertainty about, through today's discussion, or maybe, even if it's not too early, where we'd like to be heading as we take off on this .2 and leave the 46 series. How does it result in us being different, thinking differently, acting differently in the rest of our post-46 lives? So I'm Daniel. I'm a researcher in California. I just had a conversation with Dean before we began about some other core concepts besides beliefs, desires and intentions. So what other acronym models could we use? What other kinds of models are consistent, or where else can active inference be providing us extra information or utility?
And a few other things. And I'll pass it to Dean. Thanks, Daniel. I'm Dean. I'm here in Calgary. I think what I'm curious about is a couple of things. As I said, I'm curious about behaviors that appear to be targeted versus behaviors that appear to be non-targeted, and there's some reference to that in this paper. The other thing, as I said, I'm kind of curious about is this moving to the place that says that active inference has these different types, be it extended or decision-making or motor active inference, and what the implications of that are. The bottom line for me is, I think if we can operationalize that word through active inference, it shouldn't be difficult to figure out that it applies to behaviors, that there are operations grounding a lot of behaviors. So yeah, that's what I'm kind of curious about. I'll pass it to David. It's great to be back. It's my second time on here. Really appreciate this. I'm on 33, so I've listened through lecture 33, so I'm almost caught up to the current one. I look forward to having Daniel on my channel Friday, 4 p.m. Eastern time. There's so much to active inference, because it's a useful framework for countless different fields. And specifically related to this, I definitely had thoughts. And then, how do we look at the mechanics of the equations versus what we call folk psychology, which I would put generally into what I would call introspection? I have a background in theology and I've thought a lot about introspection, and about trying to match what we see through various methods of introspection with where the science and mathematics is leading us. And also, I guess, the field of psychometrics, which isn't necessarily introspection, but may or may not match the equations. And then, how do we describe the phenomena? Do we understand them, or maybe they're not describable in words, and we describe them with the mathematical equations.
And there's not really any good way to translate that into words that would be understandable to your folk, common people, in that way. Awesome. Ali? Hi, I'm Ali. I'm an independent researcher from Iran. I'm very glad to be here, and I'm looking forward to learning more about this fascinating paper. Especially, I'm interested in exploring phenomenological aspects of BDI, that is, Belief, Desire and Intention, and also to explore somewhat more deeply the philosophical underpinnings of the representation-versus-simulation aspect of modeling BDI in active inference. And yeah, that's it for me. Great. Let's begin with this more taxonomic question about the types of active inference. So we can think about different types that we've heard about in this paper and elsewhere, and then whether or not we have something to contribute about adding a specific type to this list that will continue to grow. What does it mean or entail that we do have these different types? So just to review: in this paper, 46, they make a very clear upfront distinction between what they characterize as motor active inference, MAI, and decision active inference, DAI. And there are a few differences between motor and decision. Who can list one category of difference? Well, motor active inference doesn't typically require that you have to make a decision. It's more a reactive or reflexive form, as compared to decision active inference, which does, as the paper says, likely map over to the desire question. Yes. So the motor AI was connected to motor reflexes. Are there decision-making reflexes? Even if that's not a classical usage, like the reflex arc of the spine or something, moving a hand away from a hot surface, are there cognitive reflexes? I'm not really sure.
I just basically was asking, in the context of people who are writing these papers now, who are bringing the question down to a motor-operational behavior standpoint: is breaking active inference up into different classifications a response to a bigger problem, which is that behavior isn't necessarily something we're able to conveniently dissolve down, or concentrate down, into motor operations or extensions or any of those other different things that the writers tend to want to focus on now, as opposed to the complexity and the variety and the randomness of behavior as a whole? Yes. Great. So this is from page 81. Oh, yes, I'll go for it. Well, I think the distinction between MAI and DAI is not globally accepted among various researchers. For instance, look at the work of Alain Berthoz, the French neuroscientist, especially his book, I think it was called The Brain's Sense of Movement. In that book, he actually has a famous quote, namely, "perception is simulated action." And by this he implies that perception and motion, and motion and perception, or thinking for that matter, are irreducibly interconnected, and so the former defines the latter, because of the intrinsic motricity of creatures endowed with nervous systems. So I'm not sure if we can distinguish between MAI and DAI in such a clear-cut manner, because as we know, most motricity and motor actions are also prefigured in perception or many other thought processes. Thanks. David? Yes. So I didn't know enough about the BDI model, or, you know, I'm still new to the language, especially of the authors of this paper, and to the general philosophy. I was talking a lot last week about the unified I and higher-order rules.
So if you want, like, decisions as higher-order rules, and they would have some sort of power, like usually when we think of consciousness and a unified I that makes decisions. And we talked about eliminativism and the implications of the equations. Are they eliminativist or not? What direction does the evidence take us? And so, in terms of motor control, like I mentioned Fodor's modular mind: it could be that, you know, it's a threshold. It's a free energy principle. And it could just be action, reaction, sense perception, and then a module of our mind sends, you know, some perception to whatever we understand consciousness to be, which makes it appear to us like we made a decision. But that may not be accurate; the equations might hint that that's not actually what's going on. It's simply a free energy threshold being reached. The threshold could conceivably have nothing, or very little, to do with the decision-making process. And it could be post facto, like the Libet experiments: the perception that we made a decision actually comes post facto from the threshold. Cool. Thank you. Also, I think Adam Safron, who had discussed those Libet experiments, does find a space for agency, if that's not too coarse a simplification. But there are some interesting ways to think about what that experiment does or doesn't show. Maybe we can return to that when we have some figures and details. Dean? Yeah, I didn't know what you guys were talking about with this eliminativist thing, until I went back to the paper, and then I found this, and I'll just quote it. This is from the paper: "We also motivate a squarely non-eliminativist position with respect to such constructs and suggest that the set of theoretical primitives out of which active inference models are built is sufficient not only to recover the categories of folk psychology, but also potentially to nuance them with more fine-grained distinctions."
So I'm not sure exactly how what you've been saying corresponds with that or not, but I'm kind of curious. Yeah, go for it. Yeah, I mean, to me it's a fallacy, like a best-of-both-worlds fallacy, that materialistic research into consciousness makes. In reality, my understanding of their conclusions is that what they're saying is eliminativist, and they'll put in a little statement like "it's not eliminativist." But what does the evidence of their models actually say? Can they actually program something like a unified I, or did they eliminate the unified I, and it's just a bunch of perceptions and various factors that reach a threshold? And what does it mean to make a decision? And that's why I said: do we just throw away thousands of years of dualistic theology? Your whole lexicon of consciousness psychology comes from a dualistic tradition. So they're trying to grandfather in these dualistic concepts, but even the evidence of what they're saying doesn't necessarily seem to warrant it. And then, you know, do they have the courage to say, "no, unfortunately, the evidence points to eliminativism," and go with that? So, I mean, my impression is they can just throw in the statement, but it's unclear how they're justifying it, because, you know, I guess it's going to be unpopular to be an eliminativist. It's kind of like the free will question. I mean, eliminativism also relates to free will, but, you know, to stand up and say, "no, I'm sorry, free will doesn't exist, that's where the evidence is pointed." So they put in there that they're not trying to say that, but my impression is that they actually are trying to say that. That's what the evidence of their conclusions is saying, and they just have, you know, the human weakness that doesn't want to go so far as to follow the evidence of their research. Interesting. So just to continue on this motor and decision AI.
So, page five: MAI does not make decisions. Once a decision has been made about what to do, MAI uses proprioceptive prediction signals to move the body to carry out the decided action sequence. And elsewhere we've talked about that sort of set point, and then a transient suppression of precision which allows the body to reach this new spatial position. Also, these models have tended to be formulated in continuous time, whereas decision AI plugs in very nicely to the motor AI, because it is explicitly considering decision-making and these higher-order rules, which have to do with either, if you take the more realist perspective, what organisms or cognitive systems are actually doing, or the more instrumentalist perspective, which is just: this is how the process of decision-making can be modeled. And it's about which decisions are made about what to do in order to generate some observations and not others. And it's that focus on the observations that are generated under different courses of action, and their evaluation in terms of expected free energy, that also ties it closely to, like, perceptual control theory. So that's the distinction they bring up. And again, whether in the real world this carves cognitive systems at the joints is one question; in the world of the theories as constructed, this is a clean separation, although, as Ali brought up, that doesn't mean that it's widely accepted, or even that if everybody heard about it they would necessarily agree with it. And there may be phenomena that are mixtures, and that's also why they talk about the hybrid models, where it's like decision-making at a higher level and then motor at a lower level. So that could allow decisions about what section of a face to look at, or what section of a page to read, and then that decision becomes implemented in a continuous motor active inference context through oculomotor action. Dean?
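The evaluation of policies by expected free energy described above can be sketched as a minimal toy. Assuming a simplified decomposition of expected free energy into risk (divergence of predicted from preferred observations) plus an ambiguity term, a DAI-style agent would score each candidate policy and select the minimizer; the policy names, probabilities and ambiguity numbers below are hypothetical, not from the paper.

```python
import math

def efe(predicted, preferred, ambiguity):
    """Toy expected free energy for one policy: risk (the KL divergence
    of predicted observations from preferred observations) plus an
    ambiguity penalty supplied per policy."""
    risk = sum(p * math.log(p / q) for p, q in zip(predicted, preferred) if p > 0)
    return risk + ambiguity

# Hypothetical predicted-observation distributions and ambiguities
# under three candidate policies (outcome 0 is the preferred observation).
policies = {
    "approach": ([0.8, 0.2], 0.10),
    "avoid":    ([0.2, 0.8], 0.10),
    "explore":  ([0.5, 0.5], 0.00),  # uninformative but unambiguous
}
preferred = [0.9, 0.1]  # the agent's prior preference over outcomes

scores = {name: efe(p, preferred, amb) for name, (p, amb) in policies.items()}
selected = min(scores, key=scores.get)  # lowest expected free energy wins
print(selected)  # approach
```

Real DAI models compute these quantities from a generative model over hidden states; the only point here is the selection rule, namely that the chosen policy is the arg-min of expected free energy.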
Yeah, I don't think I'm leading with this question about whether we're moving to a place that says active inference will have different types, because I think that there's different math. There's math that's used in financial situations, those operations; there's math that's used in mechanical situations, those operations. So I think it's a natural byproduct of: how do you make the math work, given the circumstance? I don't think that there's anything about this that's saying active inference doesn't work, just because you want to make sure that you're applying the proper formula to the thing that you're trying to describe. But I do think it's interesting that, in order to be able to describe it more precisely, you just can't say "active inference"; you have to kind of figure out what the math is for the specific condition that you're trying to analyze. Yeah, definitely makes sense. Let's now move to this lower part of the page, on the different adjectives that we've seen. People can probably add or think of more, or even imagine them; there are so many adjectives in the textbook and the dictionary. We've talked about extended active inference, which was addressing extended cognitive features and phenomena, where there could be a sort of offloading, which was when the adaptive active entity utilizes something that is more like a notepad: it's offloading memory onto the notepad, and then the entity is holding on to information about how to seek that information. Or, on both sides of the interface, there could be adaptive activity, and so that was like the sort of true emergent extended cognitive process. Deep active inference (citations could be added; I didn't get them all for deep) was utilized to describe temporal depth, in terms of longer and longer time horizons of consideration.
This was really an important development, because it moved beyond the sort of one-shot, next-step model of active inference, and helped manage this trade-off between considering time horizons as well as taking action in the time step that the entity is actually in. Sophisticated active inference, and various other cousins of it, utilized hierarchical modeling, which could have temporal depth at one or multiple levels but doesn't need to; this is referring to the thickness, or the hierarchical modeling component. Affective active inference was used by Hesp and colleagues to refer to uncertainty parameters, and some of the cognitive interpretations of uncertainty parameters as being related to valence. And then there are others, like branching-time active inference. And it comes back to what Dean said, which is: just saying "we used active inference" may not reduce uncertainty any more than "we used a linear model." Whether just talking about it casually or writing the paper, more information has to be provided. And so this is starting to build the taxonomy and the classifications and the ways of communicating, reducing uncertainty. Okay, you went to that hemisphere. Was it this continent or that continent? Was it this coast or was it that coast? Like zooming in, ultimately resulting in just describing the particulars of the model that was constructed. Ali? Well, adding to what Dean has just said, I don't think that these different flavors and versions of active inference that have recently branched off necessarily concern themselves with different situations or different niches. For example, that last one, branching-time active inference: well, that was a very recent version of active inference, which was... Yes, this is the paper where they introduced that. And it's basically a more efficient computational approach to active inference.
And it's just the traditional active inference, but with a more efficient algorithm to compute all the parameters needed. Yes, sometimes these adjectives are describing heuristics or implementations or algorithms. Other times they're describing the model structure. Other times they're referring to the context that the model is being applied to, like an extended cognitive system. Dean? Yeah, that's what I was going to ask Ali about. So if I'm saying something is more efficient, what am I saying it's more efficient relative to? Well, relative to the traditional approach to active inference that has been proposed for the past decade or two, and also relative to the other approaches that are similar to active inference's approach but not exactly equivalent to it, for instance, reinforcement learning approaches and so on. And one other kind of analogy or mapping there would be when a new variant of a neural network architecture is introduced: there's pomp and circumstance to announce the novelty and the relevance of this for somebody who's in that field. Similarly, when there's a new variant or a new focus, whether model structure, model target or something else, related to evolutionary biology, sometimes a new word will come in. But then consider that new target or model structure. If the usage of the adjective is incoherent, say someone claims, well, I made "simple active inference," and it has a totally different structure, it might not become incorporated into the network of active inference models. Whereas if it does become incorporated, through its theoretical compatibility or utility in modeling that phenomenon, or if it's presented in terms of a model architecture that does apply elsewhere, the adjective kind of gets dropped.
And then it just goes back to being neural networks, back to being active inference, back to being evolutionary biology. And then the field moves on. And so the adjectives are just kind of like another brick in the wall. David? Yeah, I also mentioned last week, and I'm not sure if it would go here in this section, the purpose-driven act, the purpose-driven behavior, action or even perception, related to a unified I. And even if it's on the pure level of sense perception: is it just that we have, you know, these perceivers, and they perceive, and then how are the signals read? And I mean, it's easier to describe this in an eliminativistic way, whether we're eliminativist or not. For the predictive mind, the main struggle, so to say, is between short-term goals and long-term goals, and then you have to ask: what axioms are we making? Is there some sort of rule, for the higher-order rules, if we assume something like an organism acting in its own self-interest, in its, so to say, desire to survive or to replicate? And then the only struggle would be between short-term and long-term actions. Like, you know, God forbid, here we have a mosquito pandemic. I was gardening and got, you know, eaten up by mosquitoes. And just in the sense of replying to the environment: I have an itch, should I scratch it, or, if I take some longer-term view, they'll go away quicker.
If I don't scratch it. And is that, you know, something that's even in the subconscious? Naturally I would have an itch and I would scratch it, but then I would somehow enter a longer-term decision-making process that would tell me to act in a different way, and even determine how I'm going to interact with the environment in the future, let alone something that might have a moral or ethical implication. So if you have a simplified model of just: where do we need purpose for sense perception to work, and for motor control to work in terms of reflexes? I think even on the motor control level, let alone the decision-making: where does the decision-making go, what's the purpose behind it, and then how is that purpose derived? Yes, so we can go to targeted versus non-targeted. Another branch point is, I think, this important question of mapping from real-world phenomena, and even direct felt experiences, like aesthetic or just more generally phenomenological ones, onto anything about a formalism. And I'm not sure if I've quoted too narrowly, but they are writing about intentions. So just to review the BDI: the beliefs and the desires come together, and that is what generates the intention. There's the desire for the ice cream, and there's the belief that there's the ice cream truck, and that is what generates the intentional behavior of, like, pursuing the ice cream truck. Intentions map straightforwardly to policies with the lowest value for expected free energy, which is calculated across different policies. Policies are affordances that can be taken over a given time horizon. So over a time horizon of three, it's the combinations of affordances of length three. Almost all the variables in the DAI model are candidates for traditional psychological beliefs, if being used to model the right kinds of high-level cognitive processes. So what are the right kinds? But then later, in footnote five, they distinguish what they characterize as a different reading of desire.
This reading is not about the thing that is desired, but instead about the transient presence of the motivational force to approach the thing we desire. So it's like: why did you go there? It was my desire to go there. What moved you to go there? Desire. So it's almost like desire in the specifics, and then also seeing it as a motivating force. But then they note that habit might not have a direction of fit, and it illustrates another reason, consistent with their broader argument concerning desires, not to assume that probability distributions in the formalism must be mapped onto beliefs at the psychological level. So they're candidates, if we do it right, but it isn't an assumption that they must be mapped onto beliefs. Ali? Well, I wanted to mention the broader sense of the term intentionality, which I think was referred to. Because, you see, for all the entities defined by Markov blankets, defined by the existence of their Markov blankets and their ergodic behavior, well, we need an entity who can treat an effect as being about its cause. You see, an entity who can follow the direction from effect to cause. And we also need an entity who can treat an isomorphic map as being about what it maps, or let's say as oriented toward what it maps. Well, in both of these cases, the directionality or orientation can be referred to as intentionality in its broad sense. So just to clarify: the second piece you mentioned was that the entity needs to be able to treat some sort of isomorphic or structure-preserving mapping as having an aboutness of the target, like the map of the city, translating that into physical movement. And then there was a causal inference question: the entity can trace causes, such that it's doing some kind of causal mapping, where agency is the belief that one can be the cause of future aspects. So is this agency, or is this causal modeling that could happen in the absence of entity-level agency?
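The policy structure reviewed a moment ago, where a policy over a horizon of three is a length-three combination of affordances and the intention corresponds to the policy with the lowest expected free energy, can be sketched in a few lines. This is a hypothetical toy: the affordance names are arbitrary, and the scoring function is a placeholder standing in for a real expected free energy computation.

```python
import itertools

affordances = ["look_left", "look_right", "fixate"]  # hypothetical one-step actions
horizon = 3

# A policy is one sequence of affordances of length `horizon`,
# so there are 3**3 = 27 candidate policies.
policies = list(itertools.product(affordances, repeat=horizon))

def toy_efe(policy):
    """Placeholder score: a real model would evaluate expected free
    energy under its generative model; here we just prefer 'fixate'."""
    return sum(0.0 if step == "fixate" else 1.0 for step in policy)

# The "intention" maps to the policy with the lowest expected free energy.
intention = min(policies, key=toy_efe)
print(len(policies), intention)
```

The combinatorics are the point: the policy space grows exponentially with the horizon, which is one reason deep and sophisticated variants need ways to prune or structure the search.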
I think it's broader than necessarily being a cognitive agent. You see, it's any abstract entity that can sketch out these kinds of causal relationships between the signs and either their icons or their indices. So it's much broader and more abstract than necessarily cognitive agents. What kind of entities are you thinking of? And isn't that some manipulation, that kind of logical juxtaposition? Well, the one obvious example would be neural networks, neural nets. We don't necessarily bestow upon neural networks a cognitive agency, right? But they can well match at least their input patterns onto their output indices. And it's these continuums of agency, continuums of cognitive systems. It's easy to forget we're talking about a cognitive modeling framework, albeit one that has a motor cousin, MAI, and a motor modeling history, because the MAI distinction is also historical, in terms of which models were developed when. And is this pancognitivism? Are we modeling everything as if it is on the continuum of cognitive entities? I don't know if it's as controversial to say that a neural network is a cognitive artifact. It might not be self-perpetuating or embodied, or have open-ended agency, or all these other features that some cognitive systems have. But what makes it not cognitive, if you meant that? Well, probably "cognitive" here was not the proper adjective. Perhaps what I meant was, I don't know, a kind of conscious agent. But again, on the subject of using the adjective "cognitive" for neural networks, well, there are quite a few opponents to this characterization, because they believe that neural networks as agents don't necessarily decide on a cognitive basis how to map between the inputs and outputs, and that's a difference between representation and simulation, if we can differentiate between those two. So probably the more suitable adjective here should be "conscious agent" as opposed to "cognitive agent."
Does that not take us extremely far afield, to where we lose touch with empirical measurements, for example, and enter into important philosophical questions, but ones that take us away from our cognitive leg? And psychology, perhaps even especially folk psychology, is sitting at the intersection of cognitive systems and philosophical, self-reflexive systems. So it's almost a question of this teetering stack of folk psychology: which way will it end up falling? Eliminativism, perhaps, is saying we'll be able to dispense with classical philosophical debates, or some aspects of them, by just moving everything that was folk-psychological into a mathematical cognitivist framework. What I mean here is that the term intentionality can also be used in abstract models of active inference, and not necessarily only in cases where we unambiguously deal with conscious entities. That's what I mean. I think one of the takeaways or implications of this paper would be that if intentions in folk psychology are beliefs plus desires (the BDI model), and beliefs and desires are identifiable quantities in the active inference formalism, then, by constructing entities consistent with the active inference formalism, it can be said that they have intentionality, because it could be said that they have beliefs and desires, and so it would only make sense to call the behaviors arising from those beliefs and desires intentional. So doesn't this conceptualization of intentionality also subsume the more strict or narrow sense of that term? In this sense, if we infer that the use of intentionality here refers only to conscious agents, well, in that case the whole purpose of simulating active inference agents could potentially raise some questions. I'm not sure if I could express myself clearly on this matter or not. Very interesting. Dean, then David.
So maybe amongst the three of you, you can help me with this, because I don't really keep up with the latest news on neural nets. But if I were to look at a neural net, I would assume that most of those nets are given some kind of a target, even if they go out and forage for it; there is something about that. But what I've always been curious about, especially in the context of this paper, is this: if there is a target, that's been sort of linked to desire in this paper, and if there's a curiosity aspect to the agent or the net, that's been sort of linked to non-targeting. So if that's one continuum, the orthogonal aspect of that, also in the paper, is that there's a reward-seeking component, say on the left, and there's an information-seeking, or information-and-ambiguity, component to the right. So here's my question. Unless we say that curiosity actually is a target, which this paper says is actually a non-target, and the operation and the formalism actually point that out, then how do we set something up like a neural net to actually be curious? Because we can be curious, we can be non-targeting, and find reward in that if we go one direction, and find information gain in that in another direction, and those are completely different things. They're not seen in the formalism as the same thing. So I'm just curious, because as I said, I don't know enough about neural nets to know, but aren't neural nets constantly constrained by having a target? I think there are many ways to answer that. I am not sure if it would totally map onto the difference between supervised and unsupervised learning, supervised learning being where labels are provided: these X-rays belong to this patient category, these ones belong to the alternate category; I want you to get really good, but, you know, don't over-specify, don't over-fit to that distinction. That's supervised learning.
Unsupervised learning is more along the lines of: here's input; according to how you're set up, embody the statistical regularities and the structure of the information that has been input. And then that representation allows clustering and subsequent analyses, or the addition of labels later, but it isn't predicated, in the training phase, upon the data or the cases being labeled. Does that help? Yeah, but the predication is not curiosity, is it? That's my point: there is no curiosity in what you just described. Right, it's like a pipeline that is receiving whatever information is being provided, and curiosity seems to kind of break out of that. It's like, "give me more examples." And then the directed curiosity would be like, "I'm looking for more examples of the label A." The undirected curiosity would be more like, "I just would like more examples." So to ask Ali, maybe: Ali, is it consciousness if it doesn't have curiosity? Well, I don't think so, because, you see, even in unsupervised learning, the neural networks don't necessarily extract the essence of their training examples. What they do is extract a statistical prototype of their tokens, the statistical prototype of their training examples. So actually they can't recognize a dog as a dog. They just see patterns, statistical patterns, which they've learned through their training process. So they're not necessarily conscious of the object of dog or cat or whatever. And in fact, in some cases, we see that they have inferred wrong statistical prototypes, because of insufficient training examples, or inappropriate training examples, and so on.
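The supervised-versus-unsupervised contrast just discussed can be made concrete with the simplest possible stand-ins: per-class means as the "supervised" model, and a single split at the overall mean as the crudest "unsupervised" clustering. The data and names here are invented, purely illustrative.

```python
# Toy 1-D data: (value, label) pairs. Labels exist, but only the
# supervised path is allowed to look at them.
labeled = [(0.1, "A"), (0.2, "A"), (0.9, "B"), (1.0, "B")]
unlabeled = [x for x, _ in labeled]

# Supervised: labels drive training (here, per-class means).
def fit_centroids(data):
    sums, counts = {}, {}
    for x, y in data:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

centroids = fit_centroids(labeled)

def classify(x):
    return min(centroids, key=lambda y: abs(x - centroids[y]))

# Unsupervised: only the statistical structure of the inputs is used
# (split at the overall mean, the crudest possible "clustering").
threshold = sum(unlabeled) / len(unlabeled)
clusters = [int(x > threshold) for x in unlabeled]

print(classify(0.15), clusters)
```

Notice that neither path contains anything like curiosity: no step ever asks for more data, which is the gap Dean is pointing at. The information-seeking (epistemic) term in active inference's expected free energy is one formal proposal for filling it.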
But in any case, I don't think we can say that neural networks, because of their ability to recognize these patterns, necessarily have consciousness — probably, in order to achieve consciousness, other layers of cognitive and even perceptual mechanisms need to be deployed. Yeah, I had some other points, but let's stay on the curiosity topic; it's related to the point I was going to make about belief and desire: clearly defining the words. The paper takes the BDI model and maps words like belief and desire onto a relatively obscure scientific model, versus a whole historical review of the etymology — so how are we defining belief and desire? Curiosity, a desire to learn something for its own sake, would mean it's not goal-driven. The way I would understand it: desires are biological urges, like reproduction or survival, and maybe the emotions are all derivative. You have to decide how to lay that out — whether you source them all from one pure desire, with all the other desires derivative, or whether you have multiple desires, in which case you need some sort of weighting function to weight the different desires. Beliefs, then, are self-made rules that come from our own cognition, and that's where decision-making comes in. There's even, I think, a rabbinic statement on free will: free will implies that you have the choice to make the less desirous choice. As a chess player you think in candidate moves and analysis trees; in reality our candidate moves are just how we control our body — do we move our arm up or down, the various controls of our physical function. So what are the candidate moves, and in the analysis tree and decision-making, how would it come about that you make the less desirous decision? Is it purely short term versus long term? And then there's the organization of a system — I'm a Jew, I'm a Muslim, I'm an atheist; I'm a capitalist, I'm a communist — these self-directed beliefs that control our behavior, learning rules like: I can't steal, God forbid, I can't rape, I can't kill. If my body tells me, I see an attractive woman, I want to have sexual relations, but I have these beliefs, this system of rules and governance — is it purely that I don't do it because I can't get away with it, or is there self-imposed moralistic behavior? That comes down to what belief means. And you'd also have to clearly define curiosity, if you're talking about neural nets: can a computer just learn for its own sake, and can a human just learn for its own sake? We're participating in these discussions purely because we enjoy figuring things out, even if it has no practical implication for some primal directive of survival or reproduction, however you understand those primal directives. And then the equation would be similar to a chess algorithm on a computer, where it appears you could add weighting factors — program constants into the equations — so if you had just one primal directive of survival, or two primal directives of reproduction and survival, you could give those weighting factors, or have a feedback mechanism with the environment. And then, as I said, where do the beliefs — the self-made rules — come in, how are those weighting factors calculated in decision-making, and what does it mean to make a decision?
Is it possible that we make the least desirous decision? And if we can't make the less desirous decision, do we not have free will? Thanks. So, Jonathan Shock made a great point, which is that deep neural networks applied to reinforcement learning can include curiosity as an explicit factor: a term that guides the agent to the states about which it has the most uncertainty, which makes it rewarding to explore currently unexplored states. And then, just to provide a recent example, here's a way in which a neural network architecture was used like that — certainly worthy of more exploration. At a first pass, or at least qualitatively, one way to think about that — and then we'll come to how curiosity is described in the text — is that in active inference we have an imperative for action selection that includes both a pragmatic and an epistemic component, whereas a pragmatically absolutist perspective like reinforcement learning needs to generate these extra modules and alternate architectures so that information-seeking curiosity can be coerced into a reward framework. And what's the next question going to be? Well, it's rewarding to gain information — oh, what kind of information? How much information? How are we going to learn the parameters on our curiosity model? And so on. Active inference, as a first-principles approach for this kind of artificially intelligent system, allows us to include at the base level both an objective-achieving element and a pure, curiosity-driven uncertainty-reducing element. Dean, and then we'll look at how they talk about curiosity. Thanks, Daniel. So, as has historically been the case for point-twos — oh, he just left, and I was going to ask him this question — okay, I guess he's still here. So David, after the .1 and what you were just sharing with us here, I wondered, without putting words in your mouth, can I get a clarification? I think one of the things you're trying to make clear, or at least bring into the conversation, is that prioritization — the value-weighting aspect of how we see the world — is not the same as placing a formalism down on paper that includes rules like equals and plus and minus, in the FEP or the extended variational free energy piece. Is that oversimplifying what you want us to make clear — that having a value weighting, that kind of prioritization, is not the same as being able to formally set down an ordering? I'm a dualist, so I think I could use these equations for dualistic purposes: you could just factor in a weighting factor for these higher-order rules, and then my system would have some sort of struggle between two separate realms, the material and the spiritual realm. That's personally how I'm looking at it, but I could also attempt to understand it from just the materialistic realm and how the equations would apply. And then, if there are competing factors, what's the origin of those competing factors? Say desires are biological, animalistic urges and beliefs are self-created rules of cognition, and then you want a separate function like curiosity, where curiosity maybe serves the purpose of making beliefs, which later become higher-order rules — or you want the ability to have behavior that has no purpose: I just enjoy understanding things, and maybe that understanding will help me make better higher-order rules. And then, for the equations here — it's free energy, a least-action principle to minimize energy usage — so if you're asking, why did I do that, the answer would be: dummy, didn't you look at the equation? It was the free energy principle; you did it to minimize energy usage. So we'd be saying that's the origin of all human behavior, in an eliminativist sense.
And from the folk psychology side, that's where, as I said, I'm taking a dualistic perspective — these higher-order rules are coming from a non-material plane — but I think they could be modeled into the equations. Thanks. Also, free energy minimization doesn't entail the actual minimal possible energy usage, though some people have made that claim, accidentally or intentionally. It isn't what those equations, as they're being used here, necessarily represent. There could be a system that thrives optimally by reducing energy usage, but there are also situations where free energy minimization and the path of least action are not the lowest energy consumption. Was that short term versus long term? In The Predictive Mind they say that, as opposed to a dualistic spiritual-versus-material picture, it's a single materialistic picture, and it's short-term versus long-term decision-making. Okay, let's think about that in the context of how they describe curiosity. So G, the expected free energy of policies: the policies are enumerated and then G is calculated for each, reflecting the value of each policy based upon beliefs. The beliefs are P(o_t | pi), the distribution over outcomes conditioned upon policies — if I go left, what do I expect to see; if I go right, what do I expect to see; that's a belief — and the belief distribution over observations conditioned upon hidden states of the world — if there is food there, then what observations will I expect to see. And there are desired outcomes, which, as we explored in livestream 37 on the user's guide, reflect the dual implication of the P(o) distribution in active inference, as both what is expected and what is preferred: we reduce our surprise about the expected/preferred outcome distribution, and that is how the desired outcomes are realized — not by assigning a higher reward or value to increasingly preferred states, but rather by treating the preference distribution as expectations, which allows us to use surprisal and surprisal-bounding approaches toward achieving preferences. So how do they talk about desire and curiosity? Earlier we talked about the two ways to talk about desire: the specific target of desire as a desire — I desire a cube of tungsten, say — and the drive associated with desire, what made the person do it, desire in the abstract. Curiosity drives discovery of what is not yet known and thus has no preconceived target. However, one could also say it has a semi-formed concept of a target, because one could be curious about something, and their curiosity would be satiated by one thing but not by another — so in that case there was something of a prior on what they were looking for. The paper says: the claim that curiosity is non-conative — that is, not desire-related — might seem suspect on the grounds that it could also be characterized as simply the desire to learn, and this in turn could be conceived as the desire that one's beliefs be as precise as possible (even though, especially in the short term, learning sometimes feels like it reduces our precision). However, the mathematics allows us to motivate a genuine distinction here: specifically, changing outcomes to minimize the KL divergence in the risk term of expected free energy is a fundamentally different sort of process from minimizing the ambiguity term. Here's equation 2.6 from the textbook — it's in a different order, but here is risk, with the KL divergence, and here is expected ambiguity, with the H for the entropy calculation. The latter, the ambiguity term, does not care how uncertainty is resolved, so long as it is resolved — there's no preference for becoming more confident in one possible belief over another — whereas the reduction of uncertainty by reducing the KL divergence is what brings us into alignment with our preferences-slash-expectations. Hence that is a much more targeted form of curiosity, in that it allows for goal-directed information seeking. And then they write: at a minimum, the folk psychological concept of curiosity corresponds to a very different type of desire, with a distinct counterpart in BDI. Dean? Yeah, that's perfect, because what I wanted to say is that I think curiosity really points to that risk. The reason I say that is that you may find, when the ambiguity is removed, that the prize is a happy one or a not-so-happy one, and I think that's important here. As it says, the claim that curiosity is non-conative might seem suspect on the grounds that it could be characterized as simply the desire to learn — well, before the learning you're in an ambiguous state: will it be a happy surprise or an unhappy surprise? So the ambiguity implies a certain amount of risk, but then you have to act — the active inference aspect of it — to determine whether the surprise, the other side of it, is actually good or not so good. That's where the curiosity arm of this comes to the fore: it is non-targeted until it has actually been realized or somehow materialized, whereas the targeted stuff, the desire stuff, can be part of your plan. I'm not sure the curiosity fits in as conveniently as the desire does. One short note on that: we see a conditioning on policy in a few different places, so it'd be interesting to think about where planning to learn comes into play, and in what ways planning to learn is directed — or what if you plan to have undirected learning, or all these other combinations? Yeah, I'm not sure it would exactly fit the equations.
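As an aside, the risk/ambiguity split just discussed can be computed directly for small discrete distributions. Below is a minimal numerical sketch — the numbers are invented, not from the paper — of the standard decomposition of expected free energy, G(pi) = KL[Q(o|pi) || P(o)] + E_{Q(s|pi)}[H[P(o|s)]], where the first term is risk (divergence of predicted from preferred outcomes) and the second is expected ambiguity (expected entropy of the likelihood).

```python
import math

def kl(q, p):
    """KL divergence D_KL[q || p] for discrete distributions."""
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

def entropy(p):
    """Shannon entropy H[p] in nats."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def expected_free_energy(q_s, A, p_o):
    """Risk and ambiguity for one policy.
    q_s : Q(s|pi), predicted hidden-state distribution under the policy
    A   : likelihood, A[s][o] = P(o|s)
    p_o : P(o), the preferred (and expected) outcome distribution
    """
    # Predicted outcomes: Q(o|pi) = sum_s Q(s|pi) * P(o|s)
    q_o = [sum(q_s[s] * A[s][o] for s in range(len(q_s)))
           for o in range(len(p_o))]
    risk = kl(q_o, p_o)
    ambiguity = sum(q_s[s] * entropy(A[s]) for s in range(len(q_s)))
    return risk, ambiguity

# Two hidden states, two outcomes; the agent strongly prefers outcome 0.
A = [[0.9, 0.1],   # state 0 yields the preferred outcome with high probability
     [0.5, 0.5]]   # state 1 is maximally ambiguous
p_o = [0.95, 0.05]

# A policy expected to land in state 0: low risk, low ambiguity.
print(expected_free_energy([1.0, 0.0], A, p_o))
# A policy expected to land in state 1: higher risk and higher ambiguity.
print(expected_free_energy([0.0, 1.0], A, p_o))
```

Note how the two terms answer different questions, exactly as discussed: the ambiguity term only asks how uncertain the observations are given the state, with no view on which belief wins, while the risk term compares predicted outcomes against the preference distribution.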
I'm thinking here of something like Maslow's hierarchy of needs defined in terms of the free energy principle, where, say, curiosity requires excess energy. If you're on the first rung of the hierarchy, you have limited energy to apply toward curiosity, and if you do have energy to apply toward curiosity, you're best off being curious about things that will directly apply to your security, your food, the primal directives. Whereas as you move up the hierarchy, there's less risk associated with curiosity — maybe you've achieved more leisure and therefore have more energy to expend toward curiosity — until you could reach the highest levels of Maslow's hierarchy and focus on ideals: what is the meaning of truth, or understanding science. But however you're defining the free energy principle, you'd have some sort of conservation of energy, and curiosity is an expenditure of energy that is limited — though more energy resources can be allocated, so to speak, to curiosity if it makes the whole unified person more efficient: they become more secure, more safe, more social, and so on, and therefore have more energy to expend toward curiosity. Thanks. Ali? I also think it might be helpful to mention the neural basis for these surprise appraisals, because, as we know, we have two systems of thinking — system one and system two, famously sketched out by Kahneman and Tversky — or a fast track and a slow track of reacting to stimuli. In the fast track, the stimuli bypass the sensory cortex and go directly from the thalamus to the amygdala, right? So our first, immediate reaction to every surprise would be something quite similar to fear. But then, after appraising that stimulus — after it goes through the sensory cortex and then the amygdala and then the hippocampus — we can evaluate that surprise as being positive or negative, and as a result we get a positively or negatively valenced affect. So in summary: every surprise, whether it ultimately proves positive or negative, can at the very first moment induce a negatively valenced emotion in us. Yes. I brought in some quotes from the paper "Habitual and reflective control in hierarchical predictive coding"; it's from 2021 and it's by Kinghorn, Millidge and Buckley. They directly connect hierarchical predictive coding architectures — which, when they consider action, have a lot to do with active inference — to thinking fast and slow, like you just brought up. They wrote: on this view, fast actions may be triggered using only the lower levels of the hierarchical predictive coding scheme, whereas more deliberate actions need higher layers; we demonstrate that HPC can distribute learning throughout its hierarchy, with higher layers called into use only as required. That's very interesting when we think back to the MAI/DAI distinction from this paper: it's a hybrid model, albeit one where the motor/spatial control model is continuous and there's a more discrete, more mental or cognitive model that's like thinking slow. So it's almost as if there's thinking fast and slow even within the mind, and then there's thinking with the body at an even lower level — I don't think by thinking fast they were talking about the reflex arc, but there can be hierarchical modeling even there. So what are these nested models that are slower? How do they develop? Are they related to the physical architecture of our onboard machinery? To what extent do they involve our off-board or extended cognitive phenomena — our colleagues, our social networks, our books and learnings, the social structures that scaffold our beliefs, and so on? There are a lot of interesting pieces there. What would be a good direction to
go in this second half of the .2? We had targeted versus non-targeted — is there anything there that's not covered by the curiosity discussion? Is there — oh, unmute. Yes, go for it. Sorry about that. Yeah, I think we pretty much wrestled that one to the ground. Okay. My conception of targeted, I'm calling the unified I. Take the ant colony: you could look at it like the ants are all functioning separately, each with its own directed behavior, and it just works out that they're directed as a unit. But even in a human, is there a unification of the human's cells? Even sense perception — as I mentioned, we have almost 100 million rods and cones in our eyes receiving photons at every given instant, and we just look at it as if they're acting in a unified manner. But even that requires some sort of targeted behavior; even direct motor control requires that the cells in our body act in a unified manner. Could you take the free energy principle to a cellular level and say cells will act in a unified way if it's in accordance with the free energy principle, and if it weren't in accordance with the free energy principle, then cells wouldn't act in a unified, targeted way but at their own level? I'm not sure if you could take that down to the cellular level for things like sense perception and motor control — one arm wants to go this way and one arm wants to go that way, so how do we have this unified control? It's interesting to think about what would make something accord or not with the FEP, because it's a descriptive framework. Here, from livestream 25 with Professor Levin, is the ant colony example, where different subsystems — and cancer, and other situations like biofilms — were modeled this way, such that there can be coherence within each level, so to speak. I don't know if this moves us closer to understanding the experience of the I, or whether there is a fundamental unified I, but the cells in a nest mate's body can be understood as undergoing coherent behavior; the nest mates can; the colony can. And different people have taken different perspectives on what makes one level of analysis more relevant than others: an evolutionary perspective grants the unit of selection some special status; an information-processing perspective, or integrated information theory, might highlight what happens informationally within and across levels. Let's look at their dark room experiment — we came to it a little last time, but what did anyone think their figure and simulation spoke to? How was it adding details or information that weren't present in the more formalism- and qualitative-driven sections of the paper? So I have a question — I don't know if I have an interpretation of it. When others looked at this as it's now graphed out, did they see ambiguity removal, or did they see an entropy question being resolved? Because what I saw was the latter: something appeared with more information gain, or something disappeared, and there was no sense that the change of an ambiguity state was going to make any difference. So I'm curious how others interpreted the actual data as it laid out post hoc — because this is all post hoc, just comparing something like the .7/.3 to the .99/.01 desire strengths, and I think they use the term urgency in the paper. Going up, right? Yes — well, on the left side. Okay, so time is moving from left to right. There's an entity that doesn't desire — they don't have a preference for achieving ice cream — and in this simulation run they observed that they were in the dark room, and they picked the decision to flip a switch, and they then went to the
right, and ended up not finding that they got ice cream. The weak-desire agent first flips a switch and then, with higher confidence, goes to the left, and they end up realizing ice cream. In the case of the strong-desire agent, even though they have uncertainty about going left or right, initially they're in a hurry — urgent — and they go left immediately, and it turns out that does achieve getting the ice cream. One note is that these are only showing one run of each of those simulations — does the no-desire entity always go to the right, or is that a 50/50? How have people used the BDI to do quantitative modeling of behavior? Are they moving their verbal and formal arguments from earlier in the paper into a form factor that will be more amenable, or palatable, to people who are already using these models to do behavioral analysis? Well, I think the BDI — strictly from the standpoint of "here's what we watched this agent do" — it kind of had a 50/50 choice; it was playing more in the geometry space. How this is represented is more the Bayesian distribution piece, right? You've got a percentage of an area, as you're moving from left to right, that's either darkened or remains agnostic; it doesn't change what the outcome, or the matching of an outcome and the expected free energy, is. So I think it just brings more information to the observation of what's actually going on when the lights either do or don't go on — because in that first one the lights don't go on, I think, and the agent either stumbles to the right place or doesn't, but they don't care. But what happens is you can have geometry only, or you can have the geometry and a proportion or distribution, and I think that's what this is laying out for us: now you've got more information, and with more information hopefully you've got more of the variance being described than you would if you left it at the geometric picture alone. Does that make sense? Yes — and they follow up after the figure that this magnitude mapping is giving not just theoretical detail to the mapping; they provide several citations, many recent, with Smith and others, where it's used in many different contexts. And I found the following sentences after that paragraph interesting too: this motivating force could correspond to the magnitude of the KL divergence — the divergence between Q, the variational distribution over observations conditioned upon policy, and the preference distribution P(o) — within the expected free energy. That's one of the terms we've looked at in G(pi), i.e., the risk term, not the ambiguity term with the entropy. This is because stronger preference values over rewarding outcomes in P(o_t) increase the KL divergence and lead the agent to seek reward over information gain. So there's still a knob — maybe multiple, slightly interacting: pure curiosity, which would be just ambiguity-resolving but undirected; mixtures of pure and directed curiosity; weak preferences with a lot of openness; and increasingly sharp preference distributions, which induce increasingly urgent, pragmatic, value-seeking outcomes. And there are a lot of funny things that come to mind — a question for one person might be a survival question for another, a research question for a third, or pure curiosity. Where's the oxygen in this room? Is that a survival question? Are you researching oxygen distributions in rooms, so you have a target for understanding it, or are you just curious about it? Dean, then David. I think, Daniel, that clearly speaks to what David and Ali were both mentioning in terms of: what's your time frame here? You take off, you have a bird strike, and now you don't have enough time to land anywhere except on the river.
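The preference-sharpness knob described above can be sketched with a toy two-policy agent. This is an invented illustration, not the paper's actual simulation: each policy's expected free energy is taken as risk plus an assumed ambiguity value, and policies are selected by a softmax over negative G, so sharpening the preference distribution shifts the agent from information seeking toward urgent reward seeking.

```python
import math

def kl(q, p):
    """KL divergence D_KL[q || p] for discrete distributions."""
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

def policy_probs(G, gamma=4.0):
    """Softmax over negative expected free energy, with precision gamma."""
    w = [math.exp(-gamma * g) for g in G]
    z = sum(w)
    return [x / z for x in w]

def G(q_o, ambiguity, p_o):
    """Expected free energy of one policy: risk + ambiguity."""
    return kl(q_o, p_o) + ambiguity

# Two policies over outcomes [ice cream, nothing]; all numbers are invented.
#  'switch': flip the light switch first -- uncertain outcome, low ambiguity
#  'left'  : head left immediately       -- likelier ice cream, high ambiguity
policies = {
    "switch": ([0.5, 0.5], 0.1),
    "left":   ([0.7, 0.3], 0.6),
}

for label, p_o in [("weak desire", [0.6, 0.4]),
                   ("strong desire", [0.99, 0.01])]:
    Gs = [G(q_o, amb, p_o) for q_o, amb in policies.values()]
    probs = dict(zip(policies, policy_probs(Gs)))
    print(label, {k: round(v, 2) for k, v in probs.items()})
```

With the weak preference distribution, the epistemic, switch-flipping policy dominates; sharpening P(o) to .99/.01 inflates the risk term for the uncertain policy and flips the ranking, mirroring the urgent, strong-desire agent that heads left immediately.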
Is that curiosity, is that desire — what is that, in terms of how much time you now have to manage the risk aspect of the situation you find yourself in? Certainly survival, but what percentages, what proportions, of each of these different parts of the equation are being calculated at any given moment, relative to the overall time frame you're now constrained under? So yes, David. Let me just take the refrigerator example: had I been curious earlier in the day about the layout of the room, I might have remembered how to get to the refrigerator — but I didn't expend that mental energy, so here I am, hungry, and I don't know my surroundings. So the level of curiosity could have provided benefit at this point, and you have the risk factor: will I injure myself, or eat? And I was thinking of chess: AlphaZero is run on the most powerful computers for hours and hours just to understand the game, and later it's able to take that and win. As a person, it's like: I have to reproduce, I have to see to the success of my children and offspring, I have to make money and survive — so maybe I could become a professional chess player or something like that. But then: if my goal is to reproduce and see to the success of my offspring and make money and survive, there are better methods to do that than becoming a professional; it's probably not going to help me reproduce or make a living. And if I try to take my curiosity about consciousness and use it to benefit reproduction or making a living, it's unlikely to see that much fruition. But at the same time, the power of curiosity is that you don't know how it's going to benefit you. So, the ice cream example: we had a meeting in the kitchen during the day, and then we all go back to our rooms to sleep, and then we're hungry at night. The person who was curious about how the room was laid out is likely going to be able to retrieve the ice cream without any injury, whereas the person who didn't expend the energy to be curious about the layout of the room may not. In folk psychology, it would be mapping this to something like a Maslow's hierarchy of rules, which would be in line with neural nets: there would be some sort of higher-order rules, and some ability to direct the expenditure of more energy — I thought about chess and now I'm winning, and I've achieved a little social standing or won a little prize money, and now I have a bachelor's degree and enough proficiency to get a job and support a family, and now I can exert even more energy toward it. But at the time of the pure curiosity, you don't know how it may actually benefit you in terms of reward. Great points — it's almost like jumping through different hoops, where what we need might be best accomplished through pure curiosity at a future time point. So in that survival-research-pure-curiosity trilemma, there are times when being driven by one or the other helps us move through complex real-world situations. If we knew the whole story from the beginning, we could just follow along; but in a highly partial-information world, where we don't want our priors to lead us to ruin, pure curiosity and research help bring things within our survivable limits, into information-gathering spaces we wouldn't necessarily have traveled to. Limited resources — I don't know if you could program that into the computer as well, the limit of resources, if you interpret the free energy principle in general as a definition of limited energy resources. Ali? Well, yeah, adding to my own previous comments about subjective consciousness and curiosity: I personally believe that the nature of consciousness is ultimately computational, so it's amenable to
modeling and simulation — we're just not there yet. One possible approach to achieving such a model could potentially be the active inference approach, or other related approaches. But curiosity is just one aspect of consciousness, and I think it's probably helpful to distinguish between artificial curiosity and the experience of curiosity — the phenomenology of curiosity — because what we see here is an agent that simulates artificial curiosity, and it's not exactly isomorphic with the experience we get from our information-seeking drives. In fact, in this paper, I think it was on page 28 — yes, page 28 — they state that we have not addressed questions about the structure a generative model would need in order to account for experiences of beliefs and desires. So in this paper we're not dealing with the actual experience of curiosity; we're trying to model and simulate artificial curiosity. In fact, this term, artificial curiosity, is one favored by Karl Friston himself in various other lectures. Yes, very good point, and it relates to the experienced view from the inside versus the behavioral modeling view from the outside: a curious particle or a curious forager might be engaging in some kind of behavior, but that isn't the same thing as the experience of it. They do provide references to other work, though, including work that has been discussed previously on livestreams with Christopher Whyte and Ryan Smith, on report paradigms of consciousness. As we head towards the end here, I thought this was very interesting, in the section before the conclusion — section 11, the free energy principle and direction of fit — and it's a good section to carry out and bring forward into our other discussions: for an organism described by the FEP, the description will contain an implicitly normative stance about the states the organism should be in, and it can be described as having a conative, desire-like direction of fit. So there's a preference distribution, and as this paper builds up, it can have an interpretation as desire — a targeted desire — even beyond the case of forward-looking, decision-making agents. Not all Bayesian beliefs in the FEP formalism are best characterized as doxastic, i.e., belief-like. This perspective may expand upon current discussion in the FEP literature, where it is fairly prominent to discuss as-if beliefs in simple systems — whether a merely active entity like a pendulum, or adaptive active entities — and to say: well, I'm not saying it really believes; it's acting as if it believes, in a way that conforms to the FEP. Here their point is that such systems can equally be described as having as-if desires. Belief entered the active inference lexicon/ontology perhaps because belief is also used in Bayesian statistics, where distributions more generally are called beliefs — likewise disjoint from the phenomenology of experiencing a belief; what's called belief updating is just a process in Bayesian statistics. This paper closes the loop with belief, not toward phenomenology, but by stapling together the Bayesian distributional usage of the term with the doxastic sense — the aboutness of those Bayesian beliefs being the folk psychological concept of belief. By making that mapping, and connecting the formalism as a whole — or at least the core equations, the F and the G — to the BDI model, it also allows them to identify which Bayesian belief distributions map to belief in the BDI and which map to the desires in the BDI. And so, to the extent that somebody can say a system has an as-if belief, implicitly, in a folk psychological way, we can also talk about as-if desires. That helps partition, but not disregard, the discussion around what different systems experience — the classic "what is it like to be X" — but it at least, according to this argument and perspective, authorizes usage of intentions and desires — intentions arising from basically the combination of the beliefs and the desires in the BDI — and talking about active systems as if they have desires: what is that simulation desirous of, and then using its beliefs and desires to explain its intentional behavior. Not the experience of intentionality, the experience of belief, the experience of desire, but using them in this view from the outside, a more deflationary perspective. So that's quite interesting, and I think we'll see how it catches on. Are there other sections people want to talk about, or what will they carry forward into their post-46 lives? I think, Daniel, a couple of things. The desire part you were just talking about — I'm not sure it's specific in nature; I think it's more generalizable, sort of scale-free, relative to some of the stuff you brought up. And I think when you start getting to know active inference a little more — and I'll speak from personal experience, with a focus on a little bit of personal grooming — you become one heck of an attractive state. You might not make money off of that at this point, but as you get more comfortable taking the information and using it across a variety of different situations, it actually is quite helpful. I don't think I'll ever get rich off of it at this point, but it is interesting how it's growing in terms of the amount of attention it's getting. So those are my two takes on this so far. So you're saying it might influence fitness? I think it might, down the road, as long as we have enough time — time scale. I've been coaching chess for a while, mostly to youth, and I'm used to explaining these concepts somewhat. I want to say that people who are just curious about chess — how the patterns work, how the pieces work — often improve much faster than people who desire to win, and that
a lot of times people will reach a certain strength and then the curiosity diminishes, and it's just their desire to win, which isn't enough. It's the difference between "I want to spend my night thinking about these patterns" versus "I really want to win and have some practical benefit from my efforts." I have my own personal goals that brought me to active inference, because I was looking for tools to advance my own research, and I came upon active inference as a method or tool that, even though it appeared the majority of people were using it for a different purpose, I would be able to rework for my own purpose. And it's an interdisciplinary study, and I'm an engineer, and engineering usually works that way: someone has professional certification, they sign off, and you can expect that they did their part right, that it's going to work, and that you can attach their module to your module and both modules are going to do what they said they're going to do. And then the curiosity question: do I really want to understand how your module works? Do I really care about Dean and Ali's research, or do I just want to trust that their module works, so that I can attach it to my module and make something more powerful? Both are probably true. With limited energy and limited resources, I would like to understand Dean and Ali's module to the best of my ability, but energy, time, and resources are limited. Still, if you had a module that would help my module, we could plug them in together and create a more powerful interface. So in that sense I'm bullish about active inference as a framework that will benefit from more and more people getting involved, and I'm seeing in my own direction and my own research how I think this will be useful. And I even think I
got to a few people: Jennifer, I think, joined the Discord, and I mentioned to a few others that it might be able to advance their research, and that they should try to understand the current existing modules that are out there within active inference. As I mentioned, big issues take interdisciplinary studies: none of us is going to figure this out alone, and none of us is going to be able to understand all the disciplines necessary to piece together the bigger puzzle.

Nice, thank you. Yes, an interesting point about curiosity-driven learning and how that can be, not even paradoxically, a more successful approach or mentality with which to set off on a learning journey. I think about somebody trying to learn organic chemistry, which I tutored for many years. People who set themselves up for the pragmatic value and said "I want this outcome": sometimes they achieved the outcome, other times they didn't, but when it didn't happen on the expected timeline it set up a lot of psychology around not achieving the outcome, and a lot of secondary cascading failures. However, there's also the clear counterpoint: if we don't want the higher grade, if we're just going to be curious about learning organic chemistry, well, my final is in six weeks, so how are we going to balance the timelines and the curiosity? I think maybe this paper even helped us look at one way, which is: keep the preference direction, but understand how the urgency of your preference influences the way that you take action. Just think: what if your final was tomorrow, how would you go about it? Okay, now what if the final was in 100 years, how would you go about it? Where are we on that continuum? Do we have time for curiosity and for openness, or is this an actual emergency situation? Then we can triage where there is an emergency organic chemistry situation, but also hold space for curiosity and openness. And even in the
emergency, there has to be information seeking; it's just more directed information seeking. Okay, we're going to work through the practice tests, and that's what we're going to have time for. There's still going to be information seeking, but not of the same kind as if we had more time to space things out, even though in an alternate, perhaps preferable, world that's the better way to learn, the more fun way to learn, and so on. Any other thoughts?

One of the things I really like about active inference research is the willingness of the researchers in this area to seriously address the criticisms instead of just dodging them, and their willingness to constantly improve and update the theory in order to fill the gaps and address any weakness or loophole in it. It's liberating to see such an active area of research going forward in a very determined and also very open-minded way. That's one of the things that I think sets it apart from some other research areas today.

It's what we prefer, it's what we expect, and we're curious about it. Okay, Dean, Ali, David, thanks a lot for this fun discussion, and thanks to Ryan, Maxwell, and Alex for this paper. It really capped off a long run of live streams; we've had quite a nice sequence going back to the beginning of this year, getting through many discussions. Next week we're going to have the second quarterly round table meeting, then we'll have July off of paper discussions, and we'll be coming back in August with more paper discussions. We don't know exactly what they're going to be, but we know some of them, and we have a few guest streams and other streams coming up. It's been a nice run; the live meeting next week will be a great cap to this season/semester, and then we'll come back for the end of 2022 with some new changes that I think people will be really excited about. So thanks, everybody, and see you later.

Thanks, Daniel. Thanks, everyone.
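[Editor's note: the "final tomorrow versus final in 100 years" discussion maps onto the standard active inference idea that action selection trades off epistemic value (curiosity, expected information gain) against precision-weighted pragmatic value (preference satisfaction). The toy sketch below is purely illustrative, with made-up numbers and a hypothetical `urgency` parameter; it is not code from the Smith, Ramstead and Kiefer paper.]

```python
import numpy as np

# Toy illustration: two candidate study policies for the organic chemistry
# example in the discussion.
#   0: "explore" - high expected information gain, weak match to preferences
#   1: "drill"   - low expected information gain, strong match to preferences
# The numbers are arbitrary and chosen only to make the effect visible.
epistemic = np.array([1.0, 0.2])   # expected information gain per policy
pragmatic = np.array([0.3, 1.0])   # expected preference satisfaction per policy

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def policy_probs(urgency):
    # Negative expected free energy (to be maximized):
    # epistemic value plus urgency-weighted (precision-weighted) pragmatic value.
    neg_efe = epistemic + urgency * pragmatic
    return softmax(neg_efe)

print(policy_probs(urgency=0.1))  # final in 100 years: exploration dominates
print(policy_probs(urgency=5.0))  # final tomorrow: drilling dominates
```

As `urgency` grows, the same preference direction produces increasingly directed, pragmatic information seeking, matching the point made above that urgency changes how you act without changing what you prefer.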