Okay, it is May 19th, 2022, and it's week three of the textbook group's first cohort. We're in week three discussing Appendices A and B, and there are some notes in the sections of the book, in the chapters, and some ideas and questions that people have raised. We'll go to the questions and start with the most upvoted, so feel free to add more upvotes if you want to discuss something, even during this discussion. And then hopefully, if people are available to take notes in this section, that will help add their thoughts in and also capture what the speakers are saying. Then we'll look at each question, try to come to different answers, and just add more information that people can add more structure to later. Okay, the first question says: Appendix A is described as a mathematical background. So maybe, question one, for the authors or for you: what is the process of determining what is figure and ground for the formalisms of active inference? Two, what other math concepts and formalisms are important for learning and applying active inference? And three, what are some resources and approaches for learning math that help us learn what is useful for active inference? Anyone can raise their hand, or we can just start to add some annotations here. Like, what should be included in the primary regime of attention with a reading of a book, either linearly, like some books are, or maybe moving around the sections? So what should be in the chapters? What should be in the appendix? What is not in the appendix? What did people expect would be in the chapters or the appendix that is not covered? Yeah, Jessica, and then anyone else? Yeah, I was wondering about the multiplication of the matrices that we covered yesterday. Which equations have the multiplication of matrices? Like a couple of examples, just so I can play around with them, like in the actual active inference equations. Does anyone know one?
This is referencing the way that Appendix A starts with linear algebra and then introduces this operation of multiplying two matrices to get a product. If someone can find an equation from the book that we have already seen, or some other equation, while I'm typing up that question, that would be helpful. Can anyone describe what they thought the intention was of starting with linear algebra and using A1 as the first equation of the appendix, or we'll return to the more general questions? Yeah, I mean, linear algebra is kind of the most — I don't want to say fundamental, but practical and comprehensive way of working with a large space of data together and computing on it. So it's kind of the basis for the discrete parts, and maybe easier than the non-discrete parts. A good reason to start. I guess the question was, where is linear algebra used in the active inference math? When you go to Appendix B, there are the equations of active inference, and these are all expressed in terms of large vector spaces of variables — probability distributions. So, for example, on page 245 you've got this dot notation, which is the expectation of a value. That goes directly back to that first section of Appendix A: what does that mean, in terms of how you take an expectation of a large vector of things? How do you express that compactly? Thanks. Could you unpack — we looked at the dot notation a little bit yesterday and they were mentioning how — well, I think it was actually one of the questions, let's just see if someone asks this. Okay, they say: the dot operator in A3. The dot operator is equivalent to standard matrix multiplication where the first matrix has been transposed. So what is the relationship with expectation? Like, how does expectation, for anyone, relate to linear algebra? Yeah, expectation is the average of a value, weighted by the probability of each value.
So you're going to take all the possible values and multiply them by the probability of that value, and then divide by the sum of the probabilities — well, the probabilities normalize to one. So that will be the expectation. So if you have a whole array or a vector in your distribution, you can express that all in compact notation: a simple way of saying we're going to take every one of these terms, multiply it by the probability of that term, and the probabilities sum to one. So that will be your average overall. Awesome, thanks for that answer. Does anyone have any other thoughts on just this first question, and then we'll continue? So what is figure and ground? We're learning active inference in the chapters — that's why the chapters are there. The appendix is there to lightly supplement that; they describe it as an introduction or refresher to the basic mathematical techniques. Linear algebra is the first section, and A2 is discussing a lot of important things like derivatives and probabilities; Taylor series, variational calculus, and stochastic dynamics are going to come in in the coming chapters. But the linear algebra is important for the upcoming chapters. Okay, so this question asked about A3: they said the dot operator is equivalent to standard matrix multiplication where the first matrix has been transposed — which is flipped, like the operation in Google Sheets or Excel: right click, paste transposed. They say that in the case of column matrices, this is equivalent to the dot product. So the dot operator here is like a generalization of the vector dot product: the sum of the products of the corresponding entries of the two sequences of numbers. So the questions here were: what is the interpretation, implication, or use of the dot product of vectors? And then, what is the interpretation, implication, or use of a more generalized dot product?
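The expectation-as-weighted-sum and transpose-then-multiply ideas described here can be sketched in a few lines of numpy; all of the numbers below are made up for illustration, not taken from the book.

```python
import numpy as np

# A made-up discrete distribution over four states (probabilities sum to one)
p = np.array([0.1, 0.2, 0.3, 0.4])
# The value associated with each state
x = np.array([1.0, 2.0, 3.0, 4.0])

# Expectation as a dot product: multiply each value by its probability and sum
expectation = np.dot(p, x)   # 0.1*1 + 0.2*2 + 0.3*3 + 0.4*4 = 3.0

# The book's dot operator as "transpose the first matrix, then multiply"
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0],
              [6.0]])
dot_op = A.T @ B             # for column vectors this reduces to the dot product
```

For two column vectors, transposing the first and matrix-multiplying gives the same number as the ordinary vector dot product, which is the "column matrices" case the question mentions.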
And it looks like there's an awesome answer. What would be a situation where the dot product would be applied, if anyone can think of one? Like: we're trying to do this in this situation, this is the operation that we needed; we wanted to compute matrix A on something, so we used B dot C. To continue Eric's description from probability: you have an expectation, a probability, and you want to see the error — or the difference, I guess — between the observation and the expectation; then a dot product would give you that answer. What are some ways to compute differences? Like, if they are scalars, you can subtract, like five minus three or something. Then cosine similarity was mentioned. Can anyone think of, or imagine, another way that you could compute the difference between two different kinds of features of different dimensions? Another example is divergence, because that's measuring the difference between two distributions, and there are different divergences — the KL is one of them. What other things are in this space that might be tractable, possible distance measures? Well, I mean, these are the same dimensionality in all these cases. But another very popular one is the sum squared difference, sum squared error. And then you can have the absolute value difference. These are called different norms — I think that's the right word. Yes. There's also the Gini index, the Jaccard index, Euclidean distances, like sample-to-sample distances. So, did anyone talk about L0, L1, and L2, et cetera, norms while I'm adding some links? If I have the right numbers here, the L2 norm would be the square root of the sum of squares. The L1 norm would be the sum of absolute value differences. And L0 — I don't know what that is. It's like a presence/absence count, isn't it — how many entries are nonzero? Okay.
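As a small, concrete illustration of the norms and of cosine similarity mentioned here (all numbers invented):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([1.0, 0.0, 5.0])
d = a - b                        # elementwise difference: [0, 2, -2]

l0 = np.count_nonzero(d)         # L0: how many entries differ (presence/absence)
l1 = np.sum(np.abs(d))           # L1: sum of absolute differences
l2 = np.sqrt(np.sum(d ** 2))     # L2: square root of the sum of squares

# Cosine similarity, the other comparison mentioned above
cos_sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
```

Note that `np.linalg.norm` computes the L2 norm by default, so the L2 line above is just spelling out what that function does.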
Just so — that kind of encoding is used in a lot of different algorithms. And then the squared L2 norm is the sum of squares. So the L2 norm is what is used in least squares regression, classical statistics, t-tests, ANOVA — a lot of that is driven by the L2 norm. And all of that is in the genre of what the A3 notation is describing, depending on what the B and the C are and what the A is, et cetera — the dimensions of everything, and other operations that are happening before or after it. Later on, they mention the quadratic, which relates to the L2 norm. That's where you have the same term multiplied by itself; or if you have cross terms, then you get the covariance. And that's going to be used later with the Laplace approximation and other approximation techniques. Any other comments on A3 or the dot product? Or just this kind of linear algebra topic — linear algebra basics, the trace and the determinant, which are probably less important than the dot product but still come into play; derivatives, how things change with respect to each other, possibly time, possibly some other surface; and then probabilities. So does anyone have any other comments or thoughts on that whole section? Anything that they read or had uncertainty around, up to equation A22? Great that no one has uncertainty around any of the equations up to A22. Yeah, I mean, obviously there is a lot to learn and to understand, and we're not going to cover it all here. For people who want to think about that — like, what are we learning in Appendix A? Why is Appendix A there? — hopefully we can start to understand some of this math in the math learning group, with the people who want to be really engaged with the math questioning and process, and also connect it to computer science and to the applications, like has been happening. And then hopefully everyone can benefit from this, like finding the resources and so on.
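A tiny sketch of the quadratic idea mentioned here — same-term products on the diagonal, cross terms off the diagonal, the shape that shows up again in the Laplace approximation. The matrix is invented for illustration:

```python
import numpy as np

x = np.array([1.0, 2.0])
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# Quadratic form x^T A x: diagonal entries of A weight the squared terms,
# off-diagonal entries weight the cross terms (the covariance-like part)
q = x @ A @ x    # 2*1^2 + 1*2^2 + 2 * (0.5 * 1 * 2) = 8.0
```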
Because the relationships between the terms — the relationships amongst the terms — are driven by formalisms. Like, the way that preferences and expectations are framed in active inference is related to equations, not only a conceptual linkage. Maybe here we can just write "math learning group" and copy this to the resources. Okay. So what kinds of examples do you think would be useful for learning the equations and concepts in Appendix A? Like, if the equations felt too general and not applying to anything in particular, what example or scenario would have made sense, or would have made that clearer? Like: we're running a business, and first this person wants to do this, and then this person wants to do that. Or what would somebody have expected or wanted to find that would have made it clear? Examples or other exercises? Or a format? This is kind of like a math lecture — it's kind of directional — versus what other things could happen where you would feel like you were understanding and learning this and engaging with it. Yes, thanks, Brock. I was reminded of a video of a presentation where Carl, in real time, demonstrates and helps you work through a pattern of red — like three dots that are moving around, red, green, in this pattern. So understanding — just a simple visual example of how active inference would work that doesn't, you know, require the formalism, just surface-level pattern recognition. That would maybe be ideal, and then bringing in the math for this part and that part, with some scaffolding. Yeah, but I'm not sure the appendix was meant to be read linearly — I'm not sure there's a great way to read it in one order or another, because you kind of need it, but you kind of need the rest too, right? So yeah, great point.
And the organizing team for the textbook group, which was open to everybody who wanted to join the EDU and the comms weekly meetings — for future cohorts, people could totally be involved with planning it, communicating it, doing anything for future cohorts, as a co-organizer or another role. But we talked a lot about the reading order. Would it make sense to read the chapters without the appendix, or put the appendix last, or somewhere else? Week one was short. And hopefully the one week of regime of attention on Appendices A and B is like: we're just skimming it in a week. We're not getting a PhD in math; we're not generating these equations on a blank piece of paper. We're just seeing the forms that will be used, and some of the key areas, in the order that the authors thought those background topics were important. So for Appendix A: linear algebra, which is what we talked the most about, because it's going to come into play most quickly, especially with these notions of expectation and differences, which are going to come into play with everything that's going to happen. Taylor series, variational calculus, and stochastic dynamics we talked less about; they happen in a later order within the appendix. But hopefully there will still be many questions on them, because they're probably also areas with a lot to learn and to clarify. And also, Appendix A is not the final resource for this diffusing into us. Something really important that came up in the math learning group is that there's no glossary of variables or table of variables. So that's something we can work on with notation: connecting the variables to natural language ontology terms. And also things like: what do we even look up for a symbol? What does it mean when there's a double arrow? There are a lot of dots that could be connected.
If people ask them, then we can probably get a response, like: what was the double arrow in A3? Does it mean anything that they're offset? What are the parts of this shape that matter for the background of learning active inference? Anyone can ask any question; if it's coming to mind as an uncertainty, then writing it down just anywhere is really helpful. Because then on the first pass through learning, or a primary branch of learning, we can just go: okay, here's what A1 shows — and have that in a way that reduces uncertainty more rapidly than any of us would with an ad hoc explanation. Moritz wrote: one thing that I always find useful in thinking about matrices — you know, linear algebra matrices — is to write out pictures of the matrices and their dimensions. Because often in the notation you've got these i's and j's and k's and stuff going around, but you don't know how that maps onto the actual elements of the matrix. So those illustrations are always helpful; they could be added into this book — they're there in a few places, but not that much. There you go — things like that. Yeah. And also, connecting to the programming experience that people might have, even in no-code tools like Excel or Google Sheets: you have rows and columns, one has numbers and one has letters, but they can be other things. And anyone who's used something like Python or R, or even just done some statistical calculations — like, if you put two lists of numbers into a t-test, that would be vector versus vector, and you would address the fifth element of the list. If it was in Excel, you would need to address two numbers to say where you were in the matrix. And then a tensor is just an array with any number of dimensions, three or more, which sometimes raises: oh, but how are we going to see it spatially?
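The spreadsheet analogy maps directly onto array shapes and indexing; a quick, made-up numpy sketch of the list / spreadsheet / tensor progression:

```python
import numpy as np

v = np.array([10, 20, 30, 40, 50])    # a vector: one index, like a single list
m = np.arange(12).reshape(3, 4)       # a matrix: two indices, like a spreadsheet
t = np.arange(24).reshape(2, 3, 4)    # a tensor: three or more indices

fifth = v[4]       # "the fifth element of the list" (indices start at 0)
cell = m[1, 2]     # two numbers to say where you are, like an Excel cell
shapes = (v.shape, m.shape, t.shape)  # ((5,), (3, 4), (2, 3, 4))
```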
But then we're familiar with working with spreadsheets and data sets that are larger than two by two, or larger than two dimensions. So that will come into play a lot, and the numbers can be representing probabilities or be used for probabilities. Also, Daniel shared a link in the chat to a Python notebook that shows the operations on matrices — you can see both the mathematical notation and the actual matrix, and you can actually run code there. So I pasted the link in here, and you can fill out the rest of the row. But then someone could be looking for a guide on matrices or matrix programming, so if each person who finds something adds it here, we will have a lot to share and learn. Okay, does anybody want to — if they were the one who asked this question? I think it's probably referring to long equations that probably none of us are ready to give answers to at this point, unless somebody wants to contextualize this question. Okay, I'm hoping a read of chapter four will elucidate, when the time comes, just exploring the formal foundations here. Does anyone have a favorite or recommended textbook that contains the KL divergence and the Dirichlet distribution in the index? This is for the math learning group — okay, maybe we could tag some questions, so then we can know which ones they can look at. Wikipedia is good. The KL divergence is approachable; the Dirichlet is tougher. The links take you to definitions that enable you to put the pieces together. Okay, any comments on KL and Dirichlet from someone who is familiar? Otherwise, it sounds very technical and we don't need to go into it right now. What is up with dividing by sigma in A31? Also a detail.
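For anyone who wants to see the KL divergence and the Dirichlet mean concretely before chapter four, here is a minimal numpy sketch; the distributions and concentration parameters are invented for illustration:

```python
import numpy as np

# KL divergence between two discrete distributions
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
kl = np.sum(p * np.log(p / q))        # >= 0, and 0 only when p equals q

# A Dirichlet is a distribution over probability vectors like p and q;
# its mean is just the normalized concentration parameters
alpha = np.array([2.0, 3.0, 5.0])
dirichlet_mean = alpha / alpha.sum()  # [0.2, 0.3, 0.5]
```

Note the asymmetry: KL(p‖q) and KL(q‖p) are generally different numbers, which is part of why it is called a divergence rather than a distance.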
We'll just see if there are any questions that are not specific details, because math is very hard to understand sometimes on the fly. And there's definitely a space for interactive discussions around math that are unrecorded, which are important; however, this is a little bit of a different format. That is how I'm hopefully interpreting people's activity, rather than their disengagement, because they are here and hopefully have at least scanned these chapters. Anyone can raise their hand at any time, or add questions, or upvote things. Mean field, mode, equation nine, equation 16, probability, resource, statistical question. Appendix B — just to see if there is anything; these also seemed like all details. So, we have 23 minutes. What does anybody want to ask now? Or is there some question here, like the most upvoted one or some other one that's far less challenging, that they would like to address for the next 20 minutes? Did anyone read Appendix A or B in full? Did you just ask if anyone read it? Yes. What would anyone like to share about reading it? I totally read it with a highlighter and made several notes in the margin in my physical book. It was challenging. A lot of the notation, I feel, is crazy. Even just in Appendix B — was it the very first thing? Are they using O for observations or are they using O for outcomes? I was totally unclear on that, completely. State inference, Markov decision processes. The states that influence outcomes, O. Literally, that's what it says at the top of page 244: it says that the variable O is an outcome. Is an outcome an observation? Are those the same thing? Are they interchangeable? I've always thought O was observations. I put that question in there — it's already there. And then the question right before it — this is a really tricky section. So the two questions in Appendix B that are together: it says the likelihood of observations given a policy is not straightforward to compute.
This is because the POMDP problem is structured so that policies influence trajectories (indicated by a tilde) of states S, which influence outcomes O, without a direct influence of policies on outcomes. The problem then involves a sum over trajectories of states to marginalize these out and find a marginal likelihood of observations given policies. Like, what does "these" even refer to? If you look at the questions in Appendix B, they're there in the questions already listed. So, I mean, I felt like it was very obscure and not super clear, and also the notation gets crazy. And I'm really a stickler for defining every single variable and symbol in an equation, to understand the math so that I can actually read it in English. So I was a little bit lost. Totally agree. Thanks for sharing, Blue. Eric? Just with respect to that particular question: I think that "these" refers to the trajectories of states. And you can see that because the sum is over the tilde S — the tilde S is the trajectories of states. So you sum over all the trajectories of states to try to figure out: well, what are we going to get if we apply a given policy? That's my intuition about that. And I suspect also that observations are outcomes — they're sensory outcomes. They're the outcome of a generative model or a generative process, but they're the outcome that is observed. But how many dots do we need to actually unpack and connect for uncited, grammatically vague sentences? Right. Like, so thank you — my uncertainty has been reduced substantially just by figuring that out. But I felt like that the whole time I was reading the appendix: what is even going on here? Yeah. So, you know, the way I view it is like exercising: you're not going to run a marathon the first time you set out. But, you know, the more you stretch and warm up —
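Eric's reading — summing over states to get the marginal likelihood of outcomes given a policy — can be sketched for a single time step. All of the matrices below are hypothetical stand-ins, not the book's numbers:

```python
import numpy as np

# A[o, s] = P(o | s): likelihood of each outcome given each hidden state
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])

# P(s | policy): where each of two made-up policies tends to put the state
s_given_pi = {"stay": np.array([0.7, 0.3]),
              "move": np.array([0.1, 0.9])}

# Marginalize out the states: P(o | pi) = sum_s P(o | s) * P(s | pi)
o_given_pi = {pi: A @ s for pi, s in s_given_pi.items()}
```

Over a full trajectory you would sum over sequences of states (the tilde-S) rather than a single state, but the shape of the computation — likelihood times policy-conditioned state probability, summed out — is the same.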
And that's kind of how I view going through the appendix: refreshing on what you used to know about math, at least for me. So that's like stretching and doing a little bit of exercise this last week. And then every time you go through it, every time you see the notation, you've slept on it some more, and it becomes more and more familiar. So you can start to put together bigger and bigger chunks; some you can vaguely map to the notation and the math, others are still going to be dangling, but that's okay, because there are fewer dangling things that you're groping with at any given moment. So I think it's great to go through these appendices and see it all in one pass. All right, so I didn't run a marathon, but that's okay — I ran two miles. That's pretty good for a start. Thanks, Eric. Another light metaphor is just like — oh, yes, please, Yaka. Yeah, I was just going to comment on what Blue said, about the confusion with the notation, because I asked the question about the marginalizing of states, but then I was also confused about the O notation. Because in equation B.1 it seems that it's a trajectory over outcomes, but then there's also an O without a tilde on it. So does that mean that it's an outcome at a particular point in time? But then there's also a bold O, which implies that it's kind of a vector. But if it's a trajectory of outcomes — a trajectory, for me, implies some kind of vector form, because it's a sequence of outcomes given a policy. But then what does that bold O mean, if not trajectories over outcomes? So, just to add to the confusion, I guess. Thanks, Yaka. I'm glad — we're on the same page, you and I, for sure. It would be awesome to have a description or any notes — a reading in language: the this of this, conditioned on that, is this over that of these two things.
The first thing is this, the second one is that; an example of that is in this situation. Here's the script, here's the graphical abstract, here's the paper, here's the person you could ask. Which one of these do you think is going to be interesting? Okay, so Mike asked, and I'll copy this in: for those who have read the entire book, did you find the appendices useful as you went along through the chapters? Ali? Yeah, I also think that if the materials in these two appendices were integrated into the linear narrative of the whole book, it would be much more useful than separating them out as appendices. Because — and that's my own experience — reading the appendices independently, in isolation from the narrative, doesn't give me a sense of their concrete applications, and I don't know how to use them; all of these equations and everything become much more fuzzy. I hope this is not controversial: a linear reading of the appendix, as something that you look through, is extremely confusing, or extremely imprecise, because there are many symbols introduced that might have been introduced earlier — like, o is probably discussed earlier — but then it's briefly just mentioned, and there's a lot of symbology where it's not said what it is. So it's hard to read the appendix without prior knowledge, yet ostensibly Appendix A is the introduction or refresher. So is this the introduction on offer? They hope — they expect and prefer, perhaps — the appendix to go some way towards remedying this, it being the maths required to understand the book. The multidisciplinary basis means it is often difficult to find resources that bring together the necessary prerequisites; they're going some way. So this is like one kind of plank out there, from their point of view, from the chapters. And I also kind of agree — what ordering? These are really interesting questions.
So then what is the next connector that picks up here with this artifact? Because, like Moritz mentioned, you could basically do a whole course on deep learning to apply most of the matrix operations, and there are whole courses on the math of them. So they can't go into all this detail; they can't spend one hour, or multiple code pages, on just what this means. So there has to be some kind of compromise, but there's not going to be one specific perfect compromise, especially with length and audience considered. So, I mean, I undertook a very detailed linear reading of the book, because that's just kind of how I am — of the appendices, at least. But I think for many people — and I've been there before, where you're staring at math equations like, this is gibberish — just to read the text, to read it and go through it, it's helpful to know that it's there. So if nothing else, at least you can be reading the book and be like: oh, I remember kind of reading about, you know, Taylor series expansion in the appendix — and then you can just go back. Just having access to it, or the refresh-and-recall access, even though it is confusing — I agree — to try a linear read of the appendix: I think skimming over it, or deeply reading it, and then being able to refer back to it, is useful. Thanks, Blue.
So next week — the next two weeks, actually. The pace of one section per week might have been whatever it was for you, however much time you put in, et cetera; the coming chapters are going to be very different, because chapter one didn't have any formalisms or figures, really, just the kind of overview figures. But we're going to have two weeks for chapter two, so no need to rush it. Read a couple pages and then just go back to the beginning and restart the same pages — that's like the multiple coats of paint in a mural. This is the low road to active inference; that's where they're going to pick up in that high road / low road dialectic that we talked about last time. Let's just see what figures and what equations might happen. Okay, a lot of terms that hopefully are in the ontology already — you can use the @ symbol, hopefully, to call most of them, but if something's not there we can add it, like supplemental or entailed. Okay, so what figures and equations might we see? Here's a box 2.1 about probability. Here's an equation that's Bayes' theorem, and it's going to be talking about this example of a frog and an apple, and jumping or not jumping, and that example is going to play a recurrent role. Then some likelihoods are shown in a specific example of frogs and jumping or not, and apples. There's a worked-through example of exact Bayesian inference. Here's a table on statistical distributions — it would be really interesting to hear what support and surprise mean, and why; are these all the distributions, are there other ones, why these ones, why are they useful, where have they been used, what does it even mean? Here's where the KL divergence is introduced, and then some more analysis from a surprise perspective — Bayesian surprise with the KL divergence. There's the box on expectations, which is also what we talked about a little bit today, and that was really interesting to connect to the matrices. A figure of the generative process
and the generative model is the caption. A figure: both perception and action minimize the discrepancy between model and world. Variational free energy; variational free energy as an upper bound on negative log evidence — the figure, but with equations in it, a very common format. A figure: complementary roles of perception and action in the minimization of variational free energy — a big ActInf theme, and kind of a common Fristonian-ism, like perception and action being in the same game, in the service of the same objective function. Planning — no specific equations, but probably citations, just introducing expected free energy, which Yaka and others can probably go into in a lot more detail: expected free energy is about a future where the outcomes haven't happened. An expected free energy figure with equations. The end of the low road, introducing the two key terms: variational free energy and expected free energy. Summary. What does anybody want to add — what was something cool, whether they read it already or not? Oh, Ali, yes, please. Well, I've noticed the dispersion of some sidebar boxes, like box 2.1, in this chapter. I don't know the distinction between the purpose of these boxes — I mean, what's the function of these boxes as compared to what we see in the appendices? Because presumably these boxes cover concepts that could be skipped over if one is familiar with these mathematical concepts. So do you have any idea about the reason behind this decision — I mean, covering some basic mathematical concepts in these sidebar boxes and some others in the appendices? That's a great question. A box could reflect, like: okay, expectations, got it; or okay, the sum and product rules, all right. But it's kind of like — you might want to read that. Is this introductory highlighting, saying this is the 101 on probability? Or is it saying this is a totally skippable unit — if you want to learn about how the lizard does it, here's where you look? It can sometimes mean
both. And the format of the book is also relatively austere, though in a concise tone — it's in black and white, which, especially in some later sections, makes some visualizations hard to understand, like which curves are doing what; it's a black-and-white image, so what is there to see without the color? But it also might predispose towards more simple or visually accessible material. So what is going to be accessible, and the order — those are all really important questions, so we can just take notes on them in the weeks that we're going to continue to do this, because we only have limited live time. How about in the last three minutes: what does anybody think about the opening quotations, or specifically this quotation — "My thinking is first and last and always for the sake of my doing," William James? You know, I thought the guy was a philosopher, so I guess you just disabused me of that one. Yeah, these would be really nice areas to look into for people who like the history part. Blue and I and others are working on some different kinds of ways to reference papers — just a cool paper on sensation and perception; we have those terms, we could just link the paper somewhere. So if anyone's interested in that kind of architecture, or that kind of philosophy question — the literature right now is small enough to know what has and hasn't been done in a lot of areas that are philosophical and applied and technical. But just from first principles, what does this seem to mean? That shared — that other group that we're kind of both a part of, the liminal DAO thing that's not really a DAO, I don't know what it is — there's this debate for some reason that's going on there about, you know, different ways of knowing and being and doing, and a bunch of stuff that is kind of a mix of philosophy and linguistic fallacies. But if thinking is actually a physical process — which I think, you know, I don't think anyone here
is going to argue against then it is something necessarily that is being done and so however I guess small you want to draw that Markov blanket up and then however large whether that's your physical body actions or some extended thinking it's literally what you're doing uh they're they're it's not just for the sake of your actions but it's they are your actions like they're just others another set of your actions all right let me I'll offer a different take on it yeah awesome William William James was a great visionary and he foresaw Mark's Mark Zuckerberg's meta metaverse 120 years ago or whatever and he said no I don't want to live just inside my mind inside the metaverse I want to be out in the world interact with real people and real things so he was a meta or facebook skeptic way before his time okay thank you Ali yeah as Brock mentioned I think it relates to an epistemological I mean distinction between propositional knowledge and how to knowledge and I think this statement by this statement here tries to blur this distinction blurs the line between propositional knowledge and know how knowledge because in especially in analytical school of philosophy there's always been a very let's say not heated debate but there's a long-standing debate about about the distinction between these two kinds of knowledge and whether they're in fact they can be distinguished from each other or not thank you could you just unpack what are the two kinds of knowledge again and what are just they referring to sure well about propositional knowledge well an example of propositional knowledge is to know the exact mechanisms of walking I mean which muscles contract and which I mean in what angles the the I mean the whole thing about the biomechanics of walking the whole knowledge about the biomechanics of knowledge can constitute this kind of propositional knowledge and it's totally different from knowing how to walk I mean a three or four year old year old child has a know how 
knowledge of walking but not necessarily where he or she doesn't know anything about the biomechanics of walking so the biomechanics of walking is the propositional knowledge actually knowing how to walk constitutes the know how of walking so when people say things like active inference is integrating perception cognition and action maybe it is rethinking some of these long-held mental frameworks or distinguishing or operating differently on action and perception like one fascinating recent example from me was and working with Eric was in the area of ant pheromone modeling without going into many details people often modeled preference as a function of the absolute amount of pheromone on trails because that's what the exponential like decay is on that's what can be manipulated rather than the perceived intensity which might have a different scaling relationship so it's like a dim room you can detect a small change bright room you can't detect a small change so that kind of psychophysics of perception gets ignored implicitly because of calls for measurability because the cognitive can't be measured directly even if you potentially had a electrical measurement or something like that happening then these are awesome questions can you think without acting or is thinking in action this active paper which was on a live stream so we can provide the link to it models like perception and action in the sort of like kernel level just the autonomous sensing sentient spot level and then attention and meta cognition are both related as actions to the lower level which is like why the paper is relating computational phenomenology with mental action again just to have like a look at the kinds of models that can happen and then now does making any model fitting any data well enough and saying well we modeled it as action so is thinking in action will that model that says yeah it's consistent with that will that ever constitute positive evidence for saying thinking is action or why 
even say that or what does it mean to say it and then here's a funny meme here's the generative model and the generative the partially observable Markov decision process here's Carl Friston, Jessica and then anyone hi yes um I guess it's sort of like a beginning understanding of this is that I tend to think that there's like a bias towards action in active inference I think like the first chapter is saying like you know even if you want to like sense or like perceive you have to do some kind of action in order to gain the information and and this like you know quote is basically saying like okay even though the whole field is trying to understand cognition it's like what we decide to do and the thinking that we go through in order to determine our policy is to determine our actions and then it's sort of like feed into um the thinking itself so maybe that's why there's like the bias to action and because like our actions is what it's going to be allowing the thinking but um yeah like when we update our models or like you know trying to come up with like the policies that we're going to be doing like we have to be processing things so you know the thinking is in like as a service to action and that's sort of like what I was thinking I don't know it's a little bit like you know with the chicken and the egg but and you know I kind of tend to think of it is like yes like we have to act in order to understand but and like the understanding which will be coming from the thinking is what determines our next action and yeah that's kind of what I was thinking. Matthew? Thanks Jessica. 
Yeah, it kind of seems to me like we're stuck in a bit of a linguistic Gödelian loop of some sort, because "thinking" is essentially a verb, and verbs tend to imply, in linguistic terms, actions. So we have to pop outside of that and try to consider, with our internal mental processes, how to disentangle these concepts. But I see no reason to think, ironically, that internal mental states aren't also conscious parameterizations of processes in the brain, and of our ability to grasp control of that to some extent. So it seems very strange to me to try to separate those linguistically from our current vantage point, even though I understand why, historically, working only with language, it might have made sense. Quotes like James's strike me as somewhat antiquated, or just operating on dichotomies that don't seem to make sense given that we've unpacked these processes to the degree we have.

Thank you, Matthew, very deep points. You mentioned that thinking as a verb implies action, and then, just to complete that thought: "thought" is the noun form of it, so what does that imply? And, as Jessica mentioned, to some extent we can only express processes through their discretization via symbols. There's this process, this flow, something that can't necessarily be separated from the rest of the flows around it, but to reference it, to point to it, to describe it, to communicate it to another agent, we have to nounize it, which is kind of the opposite of verbalizing, so to speak. Wow, thank you, Matthew. Again, these are... yes, Eric? Sorry, I didn't have anything; that wasn't me.
Okay, the hand was... yeah. So, Karl Friston had an influential, long 2019 paper, "A Free Energy Principle for a Particular Physics." That was 2019, so it's been several years since then. "Particular" is, allegedly, accidentally or intentionally, a pun: it can mean specific, as in a physics for the specific systems that we're modeling, or specific as in this is a particular approach. Another interpretation is that it's for particulate entities: when we define a thing and the partition, which is going to co-instantiate the generative model and the generative process, and the Markov blanket, the interface that separates them. It separates the figure from the ground, the entity from the niche. Separating what from what? What can it represent? The separation of something from all possible separations, some separations. And Matthew, you mentioned verbalizing speech; it feels like we're speaking in nouns sometimes, yet verbalization is a process, especially dialogue. Yes, Matthew?

Can I ask a question that brings in a little bit of content from the other chapters of the book? Yeah, sure. I'm just kind of curious, because when you're talking about the Markov blankets and integrating them into this idea of how we draw boundaries: is it fair to say that, to the extent we've drawn our boundaries and do see a reduction of free energy in this system, it reinforces the idea that there's something to that boundary structure as an entity in the world, that there's a reality to it? Is that a fair interpretation?
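To make the blanket under discussion concrete: in a Bayesian network, the Markov blanket of a node is its parents, its children, and its children's other parents; conditioned on those, the node is independent of everything else in the graph. A minimal sketch; the helper function and the toy graph are made up for illustration:

```python
def markov_blanket(node, parents):
    """Markov blanket of a node in a directed graph:
    parents + children + children's other parents ("spouses").
    `parents` maps each node name to the set of its parent nodes."""
    children = {n for n, ps in parents.items() if node in ps}
    spouses = set().union(*(parents[c] for c in children)) if children else set()
    return (parents[node] | children | spouses) - {node}

# A toy graph: s -> o, s -> s2, u -> s2 (names are illustrative only)
graph = {
    "s":  set(),
    "u":  set(),
    "o":  {"s"},
    "s2": {"s", "u"},
}
blanket = markov_blanket("s", graph)  # == {"o", "s2", "u"}
```

Here the blanket of `s` picks up its children `o` and `s2` plus the co-parent `u`, which is exactly the screening-off structure the discussion is about: given `{o, s2, u}`, nothing else in the graph tells you more about `s`.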
I think many people would have a lot to say and add. This touches on the question of reification of scientific models, and the ability for an identified statistical model to transcend itself and be used in a realist way, as ontologically real, about the true joints of the world. I'm going to go to the livestream table and find several of those who have explained their research on this topic. But even sort of trying to claim it's real: once we've drawn a boundary using these Markov blankets, and we do see that it seems to be, let's say, using information to self-evidence, do we think that that's... I mean, it seems like this paradigm of thought is predicated on the idea that, to the extent that that occurs and reduces free energy, it should attract, or further attract, attention, at the very least for examination or investigation. Yes, great questions. Jakob, do you have something to add on that, or is it a slightly different area?
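The free-energy reduction in question can be written down directly for a one-shot discrete model: F = E_q[ln q(s) − ln p(o, s)] = KL[q(s) ‖ p(s)] − E_q[ln p(o|s)], and the exact posterior drives F down to the surprise, −ln p(o). A numeric sketch with made-up distributions (not from the book):

```python
import numpy as np

def free_energy(q, prior, lik):
    """Variational free energy for one observation:
    F = KL[q(s) || p(s)] - E_q[ln p(o|s)]."""
    complexity = np.sum(q * np.log(q / prior))   # KL divergence term
    accuracy = np.sum(q * np.log(lik))           # E_q[ln p(o|s)]
    return complexity - accuracy

prior = np.array([0.5, 0.5])
lik = np.array([0.9, 0.1])   # p(o = observed | s) for each state (illustrative)

# Exact posterior minimizes F down to -ln p(o), the surprise.
post = prior * lik / (prior * lik).sum()
F_post = free_energy(post, prior, lik)
surprise = -np.log((prior * lik).sum())

# Any other belief leaves F above the surprise, e.g. a uniform belief:
F_uniform = free_energy(np.array([0.5, 0.5]), prior, lik)
```

Self-evidencing in this toy sense is just the fact that beliefs closer to the exact posterior push F toward −ln p(o); lowering F is gathering evidence for the model, and hence for the boundary the model draws.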
I guess it's kind of related to the Markov blanket discussion. This may be completely wrong, but it seems to me that this discretization with Markov blankets forces us into a specific kind of discrete thinking. I think of the Markov blanket as a kind of classical model, but I presume the reality is more like a Schrödinger Markov blanket, where we can't really draw boundaries between thinking and observing a generative process, in the same way that we can think of, say, electrons as balls bouncing off each other, but they're not really; we can think of them as classical particles and that helps us to an extent. But we can also observe the process of thinking, so thinking is also kind of entangled with the generative process. So even though the Markov blanket is a useful formalism, there are definitely certain scenarios in which it's more, well, it's already probabilistic, but in the sense that the entity performing inference also, in its own inference loop, performs inference on itself, and therefore thinking is at the same time an action and part of the generative process.

Okay, another angle on this, thanks, Jakob, is what Dean T often talks about, with active inference as a framework or a filter. These are different distinctions that might be transiently useful on some axis. Fitting something to a framework is kind of the Procrustean bed: stretch it out so that it fits the perception. And active inference as a filter reminded me of what Matthew was saying, like going out and discovering divergences from expectation: we expected there not to be metabolic activity on a planet, and then there is some statistical deviation. In the classical statistical framework that might be a one-sigma difference; in the Bayesian framework, well, there's a Bayes factor of two for this evidence; and there are other ways to talk about detection of novel entities, which might be part of a reification process of the extended cognition of the modelers. One philosophical angle on that, and then anyone who wants to, then Ali: there's this book by Helen Longino which talks about methodological pluralism, about how there can be disciplinary rigor, and about the ways in which often unstated social priors can be the substance of what becomes understood to be science, as a complex phenomenon. Ali, and then Brock.

Well, adding to what Jakob said, I think the concept of the Markov blanket emanates from the Platonic way of thinking, specifically the hylomorphism school of thought, as opposed to hylozoism. The distinction between them is that in hylomorphism every concept is disconnected from every other concept, so there's a metaphysical discontinuity between concepts; but in hylozoism, the philosophers advocating that school of thought claim that hylomorphism fails to answer the question of what a thing is adequately, and they even rephrase the question: they claim the right question to ask is not what it is, but what it can do. So that's basically the distinction between hylomorphism and hylozoism, but I think the Markov blanket is a way of formalizing this hylomorphic way of thinking. At least, that's my opinion.

Thanks a lot for that, Ali. Eric? I'm sorry, I'm not hearing Eric. Can others hear Eric? Maybe I need to reload. Can you hear? Okay, I'm going to reload. Daniel, can you hear me? Yeah, now I can hear. Blue, okay, you're still not hearing Eric though? No. Sorry, yeah, there was just a small disruption, I'm gonna... okay, wait, Eric, try again. Okay, how about now? Yeah, now I can see and hear you again. Okay, how about everyone else in the audience? Okay, wow. Okay, a disruption on the hylo-field; it's happened before.

I'll just throw in my two cents, my interpretation of a Markov blanket. It's about trying to perform simplifications that make computation tractable. If you have everything interacting with everything else, then we can't do computing on that, so we apply compartmentalization and build objects, and interfaces for how the objects interact with one another, and those become tractable. That's what a Bayes net does: it says which variables are independent of and which are dependent on each other. And that works well when those abstractions, the objects and relations, the nouns and the verbs, are a good fit to how the world actually operates. So that is a driver for learning or building representations that use these Markov blankets and put the compartments where they are actually faithful to the compartmentalization we can abstract over the way the parts of the world operate. Things that are distant, that are separate, that interact only weakly, we try to factor out, pretend they don't interact at all, or use some intermediate variables to represent the interaction, in order to simplify things so we can do computation. That's how I think about Markov blankets, and we'll see if the math tells us that later on when we get to it.

Awesome. Brock, and then Ali. Okay, I'm not hearing Brock, unfortunately. Do you? Yes, we all hear Brock. Brock is reloading, I think. Now again? Okay, yeah, it does this sometimes, but I don't know if it's because we have fifteen people. Yeah, like Matthew said, I don't hear him; there are just different drop-offs. I can hear you, but I still think Daniel can't hear you. Daniel, you... okay, go for it. Okay, continue, interesting. Can you hear me? Yeah. Oh, okay. Yeah, this is just something that keeps coming up: okay, maybe they
exist, maybe they don't, Markov blankets, really. But how would they form? How do they form, evolve, and collapse? Presumably there is some point in that process, the evolution part, where the Markov blanket is extremely poorly defined relative to the system, right? Those are basically underexplored areas. It was also in relation to Matthew's question about the free energy minimization thing: there are systems, non-equilibrium systems basically, where something interesting, worthy of study, is still happening, but which are perhaps not free energy minimizing in that state.

Thanks for sharing. Just a few thoughts. Let's say some sort of mesoscale or, as it's called, global free energy minimization in the joint model is achieved, though "global" doesn't mean the whole world, like in a conversation. That isn't the same thing as local, disconnected free energy minimization; otherwise fitting high-parameter models optimally would simply reduce to fitting single-dimensional models alone. If we could just do the linear optimization on the standalone variables, then why would we ever need the larger-dimensional methods? And then think of this textbook, and where things could be in the coming years. The Markov membrane is like the linear algebra, and Jakob mentioned that this is a classical model; it is kind of classical, in the timeless-classic sense. The one-layer Markov blanket, when thinking about something like the rise and fall of civilizations, is not the end-all. It's maybe analogous to a linear regression, y = mx + b, and then you have these hundred-plus years of development on the linear regression model, all these techniques and applications and pipelines. So the one-layer Markov blanket is potentially being over-interpreted as already presenting with strengths or weaknesses in certain situations, when that hasn't been empirically demonstrated in the general case. We'll see what can be said. Matthew?

Yeah, along those lines, I was kind of curious. I've seen that there's been a decent amount of work on the hierarchical or fractal composition of these models. I'm curious whether that implies a fractality to the boundary by default, or whether there's any specific work on hierarchical composition of the boundaries, of that so-called membrane, as you said.

With better annotation we could have better answers, because there are many papers that we've discussed in a guest stream or a paper stream, and obviously many other papers that we don't have that kind of annotation for. It's quite common to see nested models in the context of nesting of cognitive processes, like in that Sandved-Smith paper, which was, I think, number 25; that was about cognition as mental action. That was nested, and the nesting was interpreted as cognitive actions and counterfactuals, basically. And then sometimes nesting is used to refer to actual containment: the state is inside of the country, the region is inside of the state; it's implied that the map is mapping onto the territory, maybe even spatially; the cell is inside of the tissue. That's like answering Schrödinger's question, everything happening from the planetary-scale analysis down to the cellular level. And this is hopefully what is spoken to with the composability of active inference. Sometimes that's framed in terms of lateral composability, how you could have three ants interacting, or 300 ants, and then nested composability, where you could have the fifth and the seventh and all those layers, with computational trade-offs, maybe with no extra information to be gleaned. And then learning the structure of the generative model, or just the structure of the partition more broadly, what the variables are and everything, is the structure-learning challenge. Cognition as structure learning, hashtag synergetics, geometry of thought, is what Friston and others have raised as a totally open area, because there's the parameter fine-tuning once you have the base graph. Then you're inside of the ability to reify, or not, that model; you're just doing parameter optimization. You know: we used two factors in this linear regression and then we optimized it with the L2 norm, so two is the best number of factors... but that can't be settled at that scale. You could say that in a two-parameter model these are the best parameters with this norm, but you couldn't pull back another level. So that's the meta-scientific and meta-Bayesian analyses, which they're going to talk about later. Okay.

Yeah, I guess I'm also kind of wondering if there are any examples that come to mind of a system that is structured such that... the clearest or simplest example that comes to mind is: let's say you have a nation and a state within that nation, and there is a port, and there is an overlap of that port interface to an outside structure that is shared by both the national and the overlapping state interests and decision processes. I'm kind of curious whether there are examples demonstrating that kind of composability of generative models, or whether that's something that's still new and open.

Those are awesome questions, if people know any. Some areas to investigate: in the phenomena you're describing, overlapping interests, informally there would be shared regimes of attention. There's the question of synchrony, which doesn't mean synchronizing identically, but generalized synchrony. Then there's coordination of affordances: we're not going to do this because of that. And an area that Jakob and others have been working on is describing that situation as an affordance on an
affordance, or like e-sub-e. It's not in the textbook, but if people are staying this long, then this is just some of the ways that some people are thinking about it, and these are areas where people can also do research and learning with us. Affordances on affordances: the safety could be on or off, so there's an affordance that modifies another affordance, and could those have compositionality, just like some of these other generalizations? Affordances for generalization, basically: what are those? And isn't discovering what those are really...

I think it's also related to that question of the formation, evolution, and collapse of Markov blankets. When do the observations and actions of one system map to the preferences of another, and vice versa? How does that come into existence, how does it start to happen, and how does it unhappen? I don't know. Awesome question, thank you.

Another angle on that, let me just check, and then Ali: this is a graph that we're going to look at a ton, and we will interpret what all of it means, what the o, a, s, B, π, G, etc. are. This is a Bayesian graph, like Eric mentioned, and the edges are dependencies, not causal influences in the world, though there are apparently potentially different interpretations. Is this the only skeleton? No. This is the y = mx + b of skeletons; there's the linear regression as the first equation of the stats textbook, and then there are all these accessory tests. Then there's creation and destruction as applied: maybe there can just be a loop that's not active inference, checking every day whether something's within an arbitrary threshold. So there could be some liminal or gray area, like an interface. I think Steven Salat described it as like a nail bed or something, an interface between more particulate and less particulate, more field versus particle. Ali?

Well, as a little side note, there's a popular science book coming out, I think in June, namely The Romance of Reality by Bobby Azarian, which touches on the emergence of Markov blankets, the dynamics of the emergence of Markov blankets, and it attacks it from many different angles: an evolutionary angle, even a cosmic perspective. It even goes as far as considering the whole cosmos, the whole universe, as, let's say, a kind of meta-Markov blanket, so to speak. I think that could be an interesting thought put forward by Bobby Azarian here. Thanks, Daniel.

Thanks. In fact, he gave part one of what was intended, and may still be, a multiple-part discussion; this was him giving that presentation, so we were in contact. It's definitely an interesting view, and it's going to take all kinds of research and education and communication. Jessica?

Yes. When I first joined the lab, some of the ideas that helped me even start conceptualizing this, in addition to the filter that you mentioned before, Daniel, and how it could be more porous or more hard and allow things in and out, were about grouping relationships and interactions. The closer the relationship between different things, and maybe this is more about biological systems, I don't know, if they're closely related or have more interactions, even though they might be different, you can group them in that sense, and maybe that relates to what Eric said about the calculation part. That was where I started connecting on Markov blankets: okay, if items or different objects are closely related, you can start putting a blanket around them, wrapping things, from some of the definitions. Those are the kinds of visual ideas that started helping me a little bit, and I still probably don't fully understand, but those are the things: relationships, filters, connecting interactions between things.

Awesome, thanks a lot for that. Interacting entities: you could have the edges representing some interaction. When people are fitting a linear model, there are interacting variables, so there's this whole discussion topic of whether the interactions are like the two ants bumping into each other, whatever that means in the quote-unquote real world, versus the statistical interaction, defining certain variables in statistical models. Leaving that debate behind us and just talking about the statistical models we have, Bayesian graphs: some variables have edges between them, which could either be designed to be there or not, or you could do some sort of thresholding approach and explore whether your thresholding parameter was acceptable. But the looser the interactions you allow, the more challenging that statistical model is to fit, and you may not have enough data to fit the ten-variable-by-ten-variable model with all the interaction terms. That's why, when doing linear modeling in a health population example, they would do model selection on what their statistical power is with that data set, to resolve certain kinds of effects and correlations and non-sphericities. That's addressed a lot in the SPM textbook and in Friston's earlier academic work. But then you mentioned how to be engaged with that process of the wrapping, or seeing the clusters, and knowing where generative models are going to arise that consist of other generative models or other generative processes. And some things interact more with others, statistically, in the model, whether by design or
just as an outcome of whatever whatever so there's sparse connectivity amongst the variables they're not like all by all connected and then that sparse connectivity simplifies a lot of things that's also related to like a lasso regression and also to the below l2 norms and then a sparse model can be factorized and that is what allows for the variational Bayesian inference which is like doing Bayesian model fitting on a factorized because it's sparsely connected graph and so a lot of these discussions are quite downstream of a lot of the philosophy of map and territory but it's still super important conversation however within the model inference with a model inference using a model a lot of these questions are are very technical and downstream of importance qualitative things that are also important to keep in mind but of a different type jessica or anyone else yeah wow very interesting here's one other question i guess we can discuss like as the as it currently stands there's a one hour meeting and then at the end of the the very next hour begins the dot tools regular organizational unit meeting so is there i mean it is the people who are here now but hopefully others would be listening to it if they're not able to make this time but like what will help people with the synchronous and asynchronous make the most of the next few months do people appreciate having a longer discussion for this do people think that the main versus the math group like is there another subgroup like a philosophy discussion that people want like kind of really do this because we're not like we don't we're experimenting with open-endedness on our first cohort and in these early phases of the book and people can probably imagine various of like the things we want to balance like respecting everyone's time and different backgrounds respecting their preferences for how much they want to learn about different topics being realistic about how much asynchronous and synchronous direct and 
peripheral time makes sense but also being realistic like there isn't a two-minute video for a process so how do people think about like that those who have stayed this far like and then anyone else can hear me yes okay yeah i think that as noted earlier in the discussion there are just so many themes that run through this that there are opportunities to pull on any of a number of threads in the course of discussion and so today's discussion was interesting and I stayed for this part of this the discussion simply because of the interesting threads that were being pulled certainly a contrast with the first part of the discussion around math and I think a lot of uncertainty amongst the attendees about how to engage with the mathematical aspects and so maybe to try and put a point on it from my perspective learning about what is active inference and how can active inference be applied in real-world situations is a motivator for me participating um obviously there are also sort of philosophical and maybe more scientific discussions that can take place um and so any any meeting could touch on any of those aspects as well as some of the asynchronous interaction could pick those up as well thanks a lot okay anyone can raise their hand here's just a few other options like if somebody hears something that's interesting to them we can in the discord make a channel that is relate or people can participate in the regular channels just like questions like or and that's going to potentially be seen and interacted with by people more broadly like what if we posted questions that we're having we're in the cohort one of the texts we do and we you know had this question do you know that's one option another um and this is like to Mike's um expression that applied active inference is a motivator epistemic and pragmatic value expected epistemic value we expect to learn a lot by like staying in these sometimes challenging or oblique or whatever discussions but then there's also 
expected pragmatic value in learning and applying active inference. So maybe that is a group we can partition off, an applied active inference group, so we can be clear about what the focal artifact is. Because not all groups at all times will be able to have, nor is it useful to have, open-ended discussions of any length, it's about knowing how far, in what ways, and for how many minutes of people's linear time. Ali, absolutely, yes. Just one note on the organizational unit meetings: the organizational units in the Active Inference Lab, like edu, comms, and tools, for education, communication, and tools, have one-hour lab meetings or group meetings. We're not necessarily doing the education work in that one hour; for some groups it's sometimes possible to do some work there, but it's increasingly moving towards sharing updates from people who want to commit to asynchronous work, or from smaller groups that want to commit to doing something. So for education-related projects, that meeting is the opportunity to ask for help, share updates, and so on. Earlier on there was more topical material, but through particularization, operations, and other approaches it becomes more pragmatic and less about discovery, less mixed-media role specification, all these processes. So it's people sticking around and being engaged, seeing an affordance and then making that contribution: wanting to facilitate a certain project, wanting to contribute actively, or even just connecting with another participant, like emailing them if they've provided their information. Just some scattered thoughts, but we want to have the applied angle, the philosophical loop, and to figure out how to even partition that discussion. We couldn't just go over every math, philosophy, and applied question and have the judge and the jury and all this apparatus, so how can we scaffold that conversation
around applied active inference for the people who are super excited and motivated? Mike? Yeah, I just want to add that there's an interesting duality related to what you just said, in that we can't go into fine detail on all of the content, and at the same time I've found this group to be remarkable at unpacking things and really getting into "what do we mean when we say that" types of discussions, what does this term mean, not taking things for granted as we go through the text. So there's a balance to be struck between taking that approach, asking what we mean when we're talking about something, putting nouns on things and relating it to language, and not going too far over into fine detail. Thank you. Lyle, and then I'll leave it to your question. Yeah, this has been a great session, a really great conversation, I really enjoyed it. Most of this is new material for me, so I'm really enjoying the breadth of the conversation, and I'm cognizant of this aspect you're digging into: how much do we separate out the pieces and deep-dive in different places, versus touching on multiple threads and their interactions? While I understand there is a balance to be struck there, I'm really enjoying the challenge of relating, as an example, the math to the philosophy, tying all these different threads together, because they are linked, right? And I do understand that some of those areas need a separate group that you can drill into, but for me personally, and this may not be the same for other people in the group, I'm really enjoying understanding the connections, the philosophy, how these different thought processes came to be through history, and getting that depth of understanding. Thank you, Lyle. And for those who want to listen to or view the livestreams, they have many, many themes, so check whether there are papers you're interested in there. The live
streams are about papers, and there's 46 papers. Guest streams are not driven by a paper, though sometimes the guest is sharing one; those are presentations, hearing from different perspectives. If anybody wants to help organize these, that's what we do in comms: facilitating and participating in these discussions, contacting authors, recommending papers. These are all distributed tasks that are leverage points for people who care to do a ton of amazing things. If somebody's interested in connecting this to a given community and making an artifact or livestream, co-organizing that, we can catalyze it at the lab scale, for individuals who know about the affordance and want to take it up to have really leveraged impacts in the active inference ecosystem. So, all these questions. And then Jessica? Sorry, I don't have a question for that, but you asked about the timetable for the edu meetings; what specific times are they occurring at? Sure, yeah, okay. They are on Mondays at 13:00 and 23:00 UTC. There are two edu meetings to reflect education being the primary mission of the Active Inference Lab, and that spreads them out across time zones. But if somebody really wants to contribute to an area, the organizational unit meeting is not the rate-limiting step. Anyone who has attention to contribute will be able to find a regime of attention connected to a task that's meaningful. The rate-limiting step is not people's availability for a one-hour meeting; it's however much time people want to contribute, and whatever the practices, we'll find something that works for people who want to be engaged. So don't take these meeting times as being when we're doing the work, or even when we're deciding. Anybody who wants to can contact the lab email address and get started on figuring that out. Jessica? Yes, this was related to the question about applied projects, and I was thinking that maybe one thing we could do is, in the project ideas section, to
have a table where people can briefly share what they want to do, or just say, okay, I'm interested in machine learning and applied active inference, and maybe this specific topic, who would be interested in discussing this or exploring what to do? And then add the names of the people who would be interested, so the person who started the project can see it and start contacting those people, see what times they can meet, and start creating their own subgroups. So maybe that's something we could do to facilitate that in a simple way: just start saying, this is what I would like to do with applied active inference while also studying the course, and I'd like to connect with people here who might share the same interest, and see what kind of feedback that person gets. Thanks. Yes, just connecting and building trust, having a buddy system or small groups, or just people who are on the relatively long path to applying active inference on teams. We don't have the speed-dating hot-swap active application protocol, but it's about connecting with people and then, however that relationship is authentic, seeing what you're interested in. In three months we'll be in a pretty different situation, but we still won't have even gone through the second half of the textbook, on application, so there's a lot of time for us to develop ideas and connect with each other. So thanks for sharing that, Jessica, and I tagged you so that we can create a table in the right way, or do it however is the right way, because it enables connections for people who want to connect around applying, and it also helps us remember the specific reference points in the text that are our attractor regime of attention for this textbook group. The textbook group isn't all of
the Active Inference Lab. If people want to apply active inference, it doesn't have to be from the ground up, though it totally could be; there are many ways to apply it, in different areas that people can get involved in immediately. So if somebody feels like doing things, I hope they feel they have the agency to do that. Okay, any final thoughts on this interesting, semi-pattern-breaking session? One final thought for me: we would have had the .tools organizational unit meeting at this time, which is why, near the end, I asked if you wanted to do different scheduling. In general it's probably not good practice, or good ways of working, to go over time, out of respect for everyone's time and all of these types of things. But in the future we'll be in Gather, so people who want to keep discussing the textbook could go into a different room, and people who want to do tools can go into another place. We need to figure out how to do that through the people who want to be there, like those here, and the feedback everybody has and the ways they want to co-create it. Tim, and then anyone else who has last thoughts? Hey, yeah, I was just going to say, this is generally way more interesting than the tools meeting usually is, so this cross-pollination combination shows great potential already, I would say. Yeah, it sounds good. I just wanted to add to that conversation, though: somebody mentioned Plato and Socrates and all that, and that whole recursive involution of the reification thing you were talking about, the thought processes maybe being almost a recapitulation of the structure learning and the Bayesian factor graphs and all that kind of thing you were describing. Plato had a concept of knowledge which he referred to as recollection, this idea, way, way back, where all learning and knowledge is
actually an act of recollecting what we already know. I just wanted to point out that there's maybe an interesting allusion or connection between those two, where it seems like what's old is new again, or what have you. Covered. Thanks, Tim. Anyone else who hasn't spoken, or who would like to add anything? Okay.