Hello, everyone. Welcome to the ActInf Lab, livestream number 41.2; it's April 13th, 2022. So welcome to the ActInf Lab. We're a participatory online lab that is communicating, learning, and practicing applied active inference. You can find us at the links on this slide. This is a recorded and archived livestream, so please provide us with feedback so that we can be improving on our work. All backgrounds and perspectives are welcome, and we'll be following good video etiquette for livestreams, joined today by Dean and Blue. Thanks a ton for joining. Check out activeinference.org to learn more about what the ActInf Lab is up to and how you can participate. Today in number 41.2, we are going to have our third discussion on the paper, Extended Active Inference: Constructing Predictive Cognition Beyond Skulls, by Axel Constant, Andy Clark, Michael Kirchhoff, and Karl Friston. We had a .0 video where we took a first pass, a first coat of paint on this paper, overviewed some of their distinctions and their background and some context, and then we had a fun .1 last week with all of us here, and we went down a few different paths. All I can say is, with trust and glue of all of us here and those who are watching live, who are welcome to write questions in the live chat, I think we'll get somewhere pretty cool in the next two hours, but we can't even know what shape it will take. So we can just start by saying hello, and then we can pick up where we've left off, or we can go in any other direction. So I'm Daniel. I'm a researcher in California, and I think the part that I'm looking most forward to exploring today is that trust and glue, and how do we trust and glue appropriately or adaptively? And I'll pass to Dean. Thanks, Daniel. I'm Dean. I'm in British Columbia at the moment.
I really like where you and Blue were going with this once I signed off last week, and I was particularly interested in the idea of: is the answer in the question? Because I thought that was an interesting way of maybe looking at whether there is an extended active inference component to that. So maybe you'll pick that thread up again today. I'll pass to Blue. Hi, I'm Blue. I'm a researcher in New Mexico, and yeah, trust and glue, or what kind of improvisational context do we use this extended active inference in? Like where is this kind of thing necessary, functional? Where do we see it? Like what kinds of environments maybe highlight extended active inference? That'd be interesting for me to explore. Awesome. So we have a few different aspects. We can talk and return to some formalisms: how are the equations connected internally? How are the formalisms connected to one another? How are the extended active inference formalisms framed in this paper connected to earlier formalisms, or, now that some time has passed since the publication, to later formalisms in active inference? How are they connected to the ideas that we've been discussing? We have questions and answers, or answers and questions. What does that look like in the context of extended active inference? Perception, action, learning, teaching, applying, finding, discovering, a lot of -ing verbs, and that is hiding or somehow intertwined with this question, which is always in the backdrop or the pre-drop or stage right or something like that: what takes us from a regime of attention or a regime of intention towards active inference itself and some specific model? So where to go? I think the most structured of the options here is to remind ourselves of the formalisms and the figures, and then maybe we can come back to any of these as we see fit. Does that sound good? Okay. Their equation one is a fairly typical restatement of Bayes' theorem.
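Equation one's decomposition of a joint model into a likelihood and a prior can be sketched numerically. This is a minimal illustration, not from the paper; the storm/lightning values below are invented for the example.

```python
import numpy as np

# Hypothetical two-state example: hidden cause s in {storm, clear},
# observation o in {lightning, no lightning}. All values are illustrative.
prior = np.array([0.2, 0.8])                 # P(s)
likelihood = np.array([[0.9, 0.1],           # P(o | s=storm)
                       [0.05, 0.95]])        # P(o | s=clear)

joint = likelihood * prior[:, None]          # P(s, o) = P(o|s) P(s)
evidence = joint.sum(axis=0)                 # P(o), marginalizing over causes
posterior = joint[:, 0] / evidence[0]        # P(s | o=lightning), Bayes' rule

print(posterior)  # posterior over causes given a lightning observation
```

Observing lightning shifts belief strongly toward the storm cause, even though the prior favored clear skies: the same joint model decomposes into prior times likelihood going one way, and evidence times posterior going the other.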
So we won't dwell too much on that, but it is relating to the decomposition of a joint model of both observations and inferred latent causes; the total model contains both of those, yet it's able to be decomposed in some amenable ways. Equation two: using variational inference one can invert the likelihood in equation one, the probability of observations conditioned on hidden states, to approximate the posterior probability of causes once a sensation has been sampled. So is this a twist, Dean, or what kind of inversion are we talking about here, when we go from thinking about the likelihood, which is like the probability of what we actually observed conditioned upon there being a lightning strike, and that is now flipping to what are the causes given the sensation? Well, here's a question for both of you. Is there such a thing as a group sensation? I mean, if we think of it, it's seldom that we would consider ourselves as a proportion of a larger volume of energy. When we find ourselves in these contexts and in these cultures, what happens when we invert that idea of I'm one pie in a whole collection of pies at a pie shop to I'm some proportion of a larger energy set, and then what are the implications of that in terms of the kind of evidence that I can or cannot gather? So that's my question for both of you, because you guys probably can answer that better than I can. So I think we definitely experience group sensation. I think about the countries that are being bombed right now, and I think about the countries where, when a tsunami hits, there is definitely a group felt sense of terror. That's a group sensation, but also even on a positive note, I think about huge concerts, and it's like a bunch of people really having fun, and that's kind of another group sensation.
Even those kinds of group things, or even large meditation retreats, are another good example of a group sensation, and it does kind of get amplified, or it feels like the sensation is going through a megaphone when you know you're experiencing it with 5,000 people or something like that, in my experience. Does that change the formalism? I think we can identify it, but does that change the math or the way that we measure that, based on the source of affective sensing within one agent versus that collective measure? I think it just changes where we put the blanket. Doesn't change the math though? I think it just changes where we put the blanket. So if you have a bigger blanket, there's more components. You would separate the group of people that are experiencing a concert from a group of people that are about to experience a tsunami. There's a huge partition, and it's the same as separating two different people mathematically. It's just separating groups of people. Just as, mathematically, we are groups of cells, tissues, organs. I think that the only difference is just what's under that Markov blanket. A few cool points to make. You asked about group sensations. If we were in a unidirectional signal processing paradigm, the question could be approached as: what are the outside signals that come in, and how do those diffuse through the group? What does that look like when we've taken a step towards thinking about perception as inference, as a generative inference? So what are groups expecting? In the same way that we talk about perception, cognition, and affect, and memory, anticipation, planning as inference by entities, what does it mean for the group to not just be passively receiving sensations, but to be engaged in active inference? So that's going to be a big question, and I'll leave it for another brief break to write out a few more options here.
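Coming back to the formalism for a moment: equation two's move, and equation three's conversion of inference into optimization, can be sketched as fitting a tractable q distribution to the exact posterior by minimizing variational free energy. This is a toy sketch with invented numbers, continuing the lightning example from equation one.

```python
import numpy as np

# Toy version of "inference as optimization": fit q(s) to the posterior
# by minimizing variational free energy. Numbers are illustrative only.
prior = np.array([0.2, 0.8])                 # P(s)
likelihood_given_o = np.array([0.9, 0.05])   # P(o | s) for the observed o
joint = likelihood_given_o * prior           # P(o, s)
log_evidence = np.log(joint.sum())           # log P(o)

def free_energy(q_s0):
    """Variational free energy F = E_q[log q(s) - log p(o, s)]."""
    q = np.array([q_s0, 1 - q_s0])
    return np.sum(q * (np.log(q) - np.log(joint)))

# Optimize q by brute-force grid search over its single free parameter.
grid = np.linspace(0.01, 0.99, 999)
best_q = grid[np.argmin([free_energy(q) for q in grid])]

exact_posterior = joint[0] / joint.sum()
print(best_q, exact_posterior)          # the optimized q approaches the posterior
print(free_energy(best_q), -log_evidence)  # F upper-bounds negative log evidence
```

The grid search stands in for the gradient descent an actual implementation would use; the point is just that the minimizer of free energy recovers the Bayesian posterior, and F never drops below negative log evidence.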
But I think what Blue said, that it depends on where the blanket is drawn or how the model is phrased, relates to the composability of active inference and the way that it's kind of like blocks that play nice together laterally, as well as in a nested fashion, including a nice little tuck at the end. So let me explain what I mean there. The way that they interact laterally is that it's a framework that allows you to simulate one ant or many ants, because one can just have more and more entities in the same lateral level or scale that are interacting with the same environment and or with each other, whether that's like direct contact or with communication, because all that one needs to specify is the entity model and then their perception, their umwelt basically, and their action selection, like their affordances. So if we know where the blanket is, that was the system of interest, that's the entity, the ant, and then what goes both directions, that allows composability of a thousand ants just like one. But then the key piece with the nesting is, even if we only write down a single-layer active inference model, so the mouse in the T-maze, or just a single layer, not a deep nested hierarchical model, we can actually still interpret that graphical model as abstracting or receiving other models that we didn't directly formalize. So it was discussed in the textbook and in other sources that we can make an active inference model of one brain region, with the inputs as sense states and the outputs as action states. And then in that single brain region model, those input states, it can just be like, that's where the black box is. There's just an inability to actually do modeling on the details of what's coming in. We're just going to leave it at some sort of stochastic process coming in. So it kind of tucks all the uncertainty away into this hidden little corner, and then we're going to interface with that uncertainty as it becomes recognized by the system of interest.
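That "tucked away" black box can be sketched in code: a single-region model whose inputs are left as an unmodeled stochastic stream, with the region simply doing recognition on each arrival. Everything here is a hypothetical toy, not a model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-region model: inputs from "elsewhere" are left as an
# unmodeled stochastic process (the tucked-away black box), and the region
# just updates a belief over one binary hidden state from each arrival.
prior = np.array([0.5, 0.5])
likelihood = np.array([[0.8, 0.2],    # P(input=0 | s0), P(input=0 | s1)
                       [0.2, 0.8]])   # P(input=1 | s0), P(input=1 | s1)

def recognize(belief, observation):
    """One Bayesian belief update from a single incoming signal."""
    post = likelihood[observation] * belief
    return post / post.sum()

belief = prior
for _ in range(50):
    # All the upstream detail is compressed into this one sampling line:
    # a noisy stream whose hidden bias favors input=1.
    observation = int(rng.random() < 0.8)
    belief = recognize(belief, observation)

print(belief)  # belief concentrates on the state consistent with the stream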
Or one could make another model of that second brain region. And it could have its own perception and action states, but then that too would have inputs coming from elsewhere and actions going elsewhere. So I think there's probably more to unpack and explore here, but this hopefully speaks to the composability of active inference models to implicitly contain nested subsystems, like we could make an active inference model of a person and recognize that they consist of organs and cells and organelles, and go all the way down to however small. And we can compose down and dig down, or compose up, as the extended perspective hopefully will have us look at. But we get to be making our single-layer model implicitly within a more composable structure than many other families of modeling might allow for. So that might give an ability to compare alternative ways of looking at multi-scale extended systems, but it will be more like a comparison of relative free energies or relative model fits, rather than some sort of ad hoc comparison that we're applying based upon a desire for simplicity or a desire to see this many levels. I wonder, in the sense that I understand that you can have the single ant and you can find the average in the eco-evo-devo, as you pointed out in earlier versions of this set of livestreams. And you can also look at the entire colony and find an average of its behavior over eco, evo, and devo as well. So I'm not questioning whether, if you use one type of math and you just decide which partition you're going to make, that that's going to kick out a certain expected kind of average. What I'm kind of curious about is the synchronicity question: so we all go to the concert, for example, and we all anticipate, because this is the anticipatory part, we all anticipate having a good time; how do we build that into what the actual result is when we all start singing along to the chorus? Because I don't know if there's a math for that quite yet.
And that's all I basically ask. I'm not questioning the formalism per se. I'm just saying, if we build up, how does that affect what the equations are telling us, versus if we, as you said, Daniel, maybe collapse down, and what the math tells us there? There's going to be an average that gets kicked out, so I'm not ignoring that. I'm just saying I think there is an anticipatory piece to this when we're talking about extended active inference: what we should be able to anticipate, what that surprise should be, and what ends up happening when those external controls in terms of rules play their part, and what each individual's heuristics that they bring to the collective play their part, and how that all sort of transpires into something which can be measured, and other parts of which may still be way too intractable. That's all. Blue, any thoughts? No, I think, I don't know. I said what I needed to say. I think it scales differently for me than how Dean perceives it, I think. I'm going to bring in another resource here. So this is from the 2021 paper by Felix Schoeller, Mark Miller, Roy Salomon, and Karl Friston, and it's Trust as Extended Control: Human-Machine Interactions as Active Inference. So staying on this trust and glue and extended anticipatory theme. So here is one figure that they have. On the left side is a unidimensional scalar perspective on trust, so this is like from zero to ten, or one could think that zero is the threshold: pass through zero rather than passing through five. Do not collect $200 on your way through five, but you might on the way through zero. And we have negative trust as well as like positive, complete trust, with various associations we can make, like having aligned interests or shared affection. They then expand that unidimensional perspective into like a two-dimensional perspective on trust. And they separate out the x-axis and the y-axis as being trustworthiness and trust. And we've been talking about glue and trust.
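The two-dimensional trust space just described can be sketched as a small labeling function. The quadrant names and the tolerance threshold here are illustrative assumptions for the sake of the sketch, not definitions taken from the Schoeller et al. paper.

```python
def trust_quadrant(felt_trust, trustworthiness, tol=0.1):
    """Label a point in a two-dimensional trust space: felt trust on one
    axis, actual trustworthiness on the other. Thresholds and labels are
    illustrative assumptions, not taken from the paper."""
    if abs(felt_trust - trustworthiness) <= tol:
        return "justified"        # on or near the y = x manifold
    if felt_trust > trustworthiness:
        return "over-trust"       # credulity: trusting the untrustworthy
    return "under-trust"          # distrust of a trustworthy system

print(trust_quadrant(0.9, 0.9))   # justified: strong glue, manifestly strong
print(trust_quadrant(0.9, 0.2))   # over-trust: lip service to weak glue
print(trust_quadrant(0.1, 0.8))   # under-trust: keys withheld from a safe driver
```

Collapsing the space back to the unidimensional picture amounts to restricting attention to the y equals x line, which is exactly why the one-scalar view cannot express the off-diagonal quadrants.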
So let's keep the glue and trust rattling around as we think about this later work, which undoubtedly is building upon the current paper we're discussing. And so on the y equals x line, where the trust and the trustworthiness evaluations are equivalent, that is where you have justified trust, meaning like a high felt sense of trust in a trustworthy system. I believe the glue is strong and the glue is manifestly strong; I believe the glue is weak and it's breaking all the time. So that's on the y equals x line, and that's like a manifold, which is to say a lower-dimensional line in a higher-dimensional state space, and it actually is this unidimensional one. So we can imagine that being at an angle and being the y equals x line. What does opening it up into multiple dimensions allow us to talk about? Well, it's allowing other quadrants to be explored, where the perceived sense of trust may differ from the objective or the relational trust. And so it's sort of like separating out the cognitive versus the behavioral dimensions of trust. Like, would you rather be the person that people behaviorally trust, like they give their keys to you, but then they're always having the sentiment or having the communications that that person's untrustworthy? Or would you rather have the lip service, the cognitive lip service, of people thinking and feeling and maybe even speaking like they trust you, but then when it comes down to it, there's no actual behavioral trust there? So they explored that in the context of human-robot interactions and technology. So then let's just look at the second one, and then it'll be awesome to hear what you two think, or anyone in the chat. Here they're looking at generations of technology, along with their adoption curves and the ways in which they enable empowerment. And so the three rows here are like the sort of mobility genre, from the legs, embodied, moving into the extended and then the cultured, with the horses and the cars.
There's like a weaponry genre, or sport hunting for those who want to. And then there's a navigation, orientation genre, which even came up in our discussion last time about cloud-based map services and how that leads to changes in our spatial navigation, and especially, as we'll be exploring in 42 upcoming, what if the neurobiology of navigation is deeply intrinsic to ourselves? So what is actually happening to ourselves as the navigational technology becomes something that we trust, something that we glue to? And so here they're just showing that there's this spiral of increasing empowerment, like an individual who's in the V3 tech stack here has certain constraints and affordances, and just simply differences, not even better except on specifically defined axes, than those in the V1 tech stack. So where do some of these representations or points that are raised in the later Schoeller paper speak to either of you? Dean, do you have something? If not, Blue? So I'm not like really getting this empowerment-time adoption; like, without reading the paper it's just not clicking for me. So I have to, I guess, catch up on this trust versus extended control. Yes, totally agree, and definitely the real-time type interpretation is non-trivial. So let's just look at what they defined there: to measure the amount of control or influence an agent has and perceives, Klyubin et al. proposed the concept of empowerment. Empowerment is a property of self-organized adaptive systems and is a function of the agent's perception-action loop, more specifically the relation between sensors and actuators of the organism as induced by interactions between the environment and the agent's morphology. So this goes back to like the definition of agency, and we were talking about empowerment as a definable, measurable metric on a livestream recently. You remember? Now it's starting to kind of click in, but we were talking about that.
Like the capacity of an entity to realize their preferences: you can have all the preferences in the world about the temperature on a given day, but on that dimension one's empowerment, their capacity for action selection to have a causal impact on what is being preferred, that is minimal; versus one could be empowered and not even care, and they could have the power to modify something and yet not approach that potentially seriously. Yeah, so it's clicking now, it's making a little bit more sense, but I'm still not really getting this graph. I understand like empowerment over time, I guess. I mean, like if you think about a graph of a human, we become more empowered. We're like an inverted parabola, right? So like we become more empowered as time goes on, and then less empowered as we age and become less able to care for ourselves. We're unable at the beginning, and then we max empowerment, and then like least empowerment towards the end of our life, or if we live to be 110 or whatever. Not always true. The end of life gets a little weird. It's a wiggly trail off. That reminds me of the classic riddle: what has four legs in the morning, two legs in the midday, three legs at night? It's a person. Yeah. And then also let's look at what they mean by this inflection point, flex, because there's a few other pieces, of course, that this paper is bringing in, and then we'll kind of return to the trust and glue of the Constant et al. paper. So of course it's a simplification. The important idea here is that the inflection point, flex, indicating the start of technological decay, reflects the abandon rate of a practice, as the experience of better predicted slopes of extended technological engagement leads to disengagement from non-extended approaches. Cars replace horses replacing legs. Old slopes are less than expected, and so unsatisfactory as compared to new ones.
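Stepping back to the Klyubin et al. notion quoted above: empowerment is usually formalized as the channel capacity from an agent's actions to its subsequent sensor states. A minimal sketch, assuming a toy two-action, two-sensation channel whose probabilities are invented for illustration:

```python
import numpy as np

# Klyubin-style empowerment sketch: channel capacity from actions to the
# sensor states they produce. The channel matrix is an illustrative toy.
# channel[a, s] = P(next sensor state s | action a)
channel = np.array([[0.9, 0.1],    # action 0 mostly yields sensor state 0
                    [0.1, 0.9]])   # action 1 mostly yields sensor state 1

def mutual_information(p_a0):
    """I(A; S) in bits for a given distribution over the two actions."""
    p = np.array([p_a0, 1 - p_a0])
    joint = channel * p[:, None]            # P(a, s)
    p_s = joint.sum(axis=0)                 # P(s)
    mask = joint > 0
    return np.sum(joint[mask] * np.log2(joint[mask] / (p[:, None] * p_s)[mask]))

# Empowerment = max over action policies of the action-to-sensation information.
grid = np.linspace(0.01, 0.99, 99)
empowerment = max(mutual_information(p) for p in grid)
print(empowerment)  # in bits; 1.0 would mean perfectly controllable sensations
```

A noisier channel (actions barely influencing sensations, like preferences about the weather) drives this number toward zero, which is the sense in which one can have strong preferences on a dimension where one's empowerment is minimal.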
So it's a technology adoption model, and often we just see the S-curve part of the adoption, and people will say, well, here's how fast the telegraph was adopted, then the radio, then the TV, then the internet. So one can look at these S-curves of adoption, and sometimes they start low, they go up, and then they flatten out, and then it's sort of like the technology is at 100% adoption. This is also showing not just the rise of empire but the decline, where we're seeing the retraction of use of a given technology, and isn't it funny that we may have actually, you know, used our legs less as other technologies come into play, like just the steps per day. So each of these technology stacks that they're modeling here has like a curve of adoption and also a curve by which they're no longer in use, with many simplifications, as people can hopefully charitably interpret. And what they're pointing to is actually that the inflection point of the next generation, when adoption is increasing as fast as it's going to increase for this new tech, occurs basically during the peak or the decline phase of the previous generation of glue, unless the technologies can be coexisting in use; but they're looking a little bit more at technologies that replace one another. So it's interesting here, because legs are actually glued to you, whereas horses can wander away or be stolen, and cars have no agency themselves, so they can kind of be glued a little bit more tightly. So among the extended ones, horses are like the least amount of glue and cars are like the most amount of glue. It's interesting. Yes, Dean.
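The overlapping rise-and-decline waves just described can be sketched with logistic curves. All parameters here are invented for illustration; the point is only that the new generation's inflection point, its moment of steepest adoption, lands during the peak or decline phase of the previous generation.

```python
import numpy as np

# Sketch of overlapping technology generations: each rises as a logistic
# S-curve, and the old one declines as the new one arrives. Illustrative only.
def logistic(t, t_mid, rate=1.0):
    return 1.0 / (1.0 + np.exp(-rate * (t - t_mid)))

t = np.linspace(0, 20, 201)
new_tech = logistic(t, t_mid=12)                                # rising S-curve
old_tech = logistic(t, t_mid=4) * (1 - logistic(t, t_mid=14))   # rise then fall

# The new generation's inflection point (steepest adoption) is at t_mid = 12.
inflection_idx = np.argmax(np.gradient(new_tech, t))
print(t[inflection_idx])          # near 12
print(old_tech[inflection_idx])   # old tech is already past its peak here
```

On these toy curves the old generation peaks earlier and is already declining when the new one hits its inflection, matching the "cars replace horses replacing legs" pattern, where the previous slope starts under-delivering relative to the newly expected one.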
So I have relatives who have a ranch in the wine-growing part of British Columbia, and in a strange conversation I asked my relative, who has a number of horses, would he ever consider, instead of people being carted around in minibuses from winery to winery on a wine tour, given the proximity to a number of vineyards, would he ever consider taking his horses and putting a business plan together around having people go from vineyard to vineyard on horseback? And he laughed me off in a heartbeat, because he said, no, I think that would be very dangerous, drinking too much wine and trying to stay on their horse. But then he came back to me later on and said, maybe I was too dismissive of that, because of the novelty aspect of it, and if you could actually control the nature, because he's got great horses, I've ridden on his horses a lot, but if you could actually control that situation, the novelty of that means that the people that would participate in that would actually be doing something very unique and very novel. And so that reversion back to something that's not necessarily seen as being super efficient is actually something that can be very attractive, and extend that generative model. We haven't followed through on that, that was mid-pandemic, but just this conversation right here, I find it kind of disturbing, because to have all of that now come rushing back a year and five months later kind of tells me what's going on here. But it's that sense that, in the collective extended active inference, in that probability of what portion of a population would actually find this interesting, instead of just dismissing it out of hand and saying that would be a logistical nightmare, you incorporate Bayes, right? You actually see, as well, maybe there is a percentage of the population that would actually see this as something really unique, as long as it's not something that's unmanageable, right? So I had no idea that you were going to bring this up; I'd have
absolutely no idea what the comparison is of empowerment to the adoption uptake of that particular offering. I find it really curious right now that here it comes again, it's popping up again, and it's not a constructivist thing so much as it's holding open the idea that there is not necessarily a bunch of people that just want to be housed in a minivan. What's the possibility then? So it's actually pretty interesting in terms of this extended active inference, right? It's not optimization in the conventional sense of, well, we can now hit nine wineries in four hours; maybe it's an all-day thing and people actually tell all their friends what a great experience they had, so you look at the model a little, your generative model is inverted. I hate to break it to you, Dean, but the idea is not that novel. Well, there you go. Where I live, I live like really pretty far out in the middle of nowhere, and there's one bar within maybe a 25-mile radius, and people ride their horse to the bar, and there's hitches for the horse where you can tie the horses outside. I drove Daniel up that road, so he knows, he's driven right past it. So there's hitches where you can tie your horse and go to the bar and drink, and people actually have been cited for DWI on horseback after leaving the bar. But also, the Cinque Terre is amazing, a little series of five towns in Italy, and there's no cars. I mean, they say that, no cars; there are these tiny little baby trash trucks and FedEx delivery trucks, but there's no roads that a car could actually drive on, just these little tiny car-like things. But you have to arrive by boat or by train, and people take horseback through these five little towns; they do horse tours and wine. It's five little towns, and what they do is make wine, so that's definitely like a winery tour, and there's a beautiful beach, and yeah, it's really kind of nice. So maybe it's novel in the area, and there's some value always to that, but yeah, had to bring up the
closest bar to my house where there's horse hitches, actually at the Family Dollar, where they hitch horses to the handicapped parking signs in the front of the store. Classic. Two things: one, shout out for the gelato in Riomaggiore, and two, I could just see it, because I've been to, well, I haven't been in The French Laundry, but that famous restaurant in Napa, up from that Napa bit, and I never saw like a dozen people riding by on their horses, but I could just imagine, for the people who are going there, that wouldn't seem out of place. So yeah, two shout outs. Here's an interesting pattern, so let's review figures one and two just briefly, because that's one of the key contributions, and it's also some of the formalism, so we are taking it fast, but then we'll take it slow. In figure one we saw the standard action-perception loop that we've seen in different guises, where we have perception and cognition as being a process of optimization, and that is what the variational approach enables. So that's what they write here: variational inference converts an inference problem into an optimization problem, articulated by equation three. So that's where we went from the Bayesian decomposition into this variational approach; variational because there's this q distribution that we control, and we're minimizing a divergence between a distribution that we're in control of, that's simplified, from a family that's very tractable to study, and this true posterior which is inferred. And then we go from that variational phrasing of this inversion to an optimization that helps us solve it. That's what we're seeing in figure one: there's optimization happening at the cognitive part of the partition and at the action part. The sensory inputs are not being chosen by the entity directly; they can engage in active inference, take action selection that then brings predictability and control to sensations, perceptual control theory, however those are not directly being controlled or optimized. So
that was figure one, and we talked about how on the other side of the table there still is this stochastic dynamical equation reflecting how external states depend on their prior states and on action. So we still have these interactive external states, with potentially complex and self-reinforcing dynamics, that are emitting sense states, and so it goes. In figure two that gets brought into the extended active inference space, because now, on the other side of the table from our star system of interest, the brain, here is also something looking more like a function of previous external states and action selection, rather than looking more like a stochastic differential equation. So now we have extended active inference, because there's like an adaptive cognitive process happening on both sides of the blanket, and that provides this new interpretation of what is crossing the boundary as being joint self-evidencing and part of a joint optimization process, even if these two different arguments are quite different from each other. That's also where figure four came into play. The same way that figure one developed into figure two, from the mere active inference on the other side of the table to the adaptive on the other side of the table, in equation two we were looking at the variational, optimizable approximation of figure one, and in equation four we're looking at the variational, optimizable equation describing basically figure two. Let's return to our discussion on trust and control and glue and tech and all of this. Even before there were spears, if we were going to have V0, we could have legs, hands, and some embodied, on-board navigational perspective. So that's the pure bioware stack, V0. And then we can think about, and it's not so clean across these different verticals of technology, but the horse for sure is like an adaptive, active entity; like, there is an intelligence on the other side of the table when somebody is riding a horse. What's interesting is that cars, maybe complex, and
however simple early cars were, were more like a mere active inference entity. I'm sure they still brought surprise, and they broke and did things that were wacky and stuff, but it was kind of like operating a motor. Now cars and technologies are actually becoming increasingly adaptive on the other side. So let's replace horses in V2: what if it was a map, like a paper map, a bow and arrow, and a skateboard? So those are all non-living extended entities, and those are one stack. And then the next level is like the technology is developing, but it still is like that offloading rather than the uploading. You can't punch the person from 500 feet, but you can use the crossbow to do basically the same transfer of kinetic energy. And with digital technology and digital models of cognition, we're moving into a place where like the car, the gun, the GPS system, they might just say, you can't do that. So that is changing the nature of interaction with technology, whereas a trusty bow was one that can be relied upon to behave predictably in a certain way. What is trust and glue when we're in the figure-two world with everything? Here, the bow and skateboard worlds are in figure one, and now this is in like an adaptive, active setting. But yeah, Blue. So I'm really reminded of the pay phones, and like, I know, you know, Dean is old enough to remember, but like, Daniel, what were you, like five, when they had pay phones? I'm just kidding. So I mean, you know, we had a telephone at home, right? Like we could trust that if we were at home we could connect with whoever we would want to connect with, and then, you know, there was all these pay phone technologies. Like if you were somewhere where there were people, where there was a gas station, you could always find a pay phone. And it's interesting, you know, so we were okay being away from our home phone, but we had some trust that somewhere we could find a phone to connect with people if we needed to, right? But we
still couldn't connect with them unless they were at their house; there was this other weird looped component. And then, you know, you don't see a pay phone now at all. Like the last time I saw one, I took a picture of it, because it's like such an anomalous thing. I mean, they just don't exist anymore, because, you know, everyone trusts that they'll have a phone, and yeah, if your phone battery dies, it's okay, you trust that you'll be able to charge it, you know, in your car, or you get a battery pack, or you have a solar battery pack, I have one of those, it's great. But so you trust that you'll be able to connect; even if you don't have your phone, someone will have a phone, right? So like, you know, if you're in a car accident and have no cell phone, someone will stop and you can use their phone. So there's always this, but it's interesting, this level of, you know, trust in the ability to communicate with others, and then what does that do to us cognitively? Like, can you check the fridge, hun, did I leave, do we need milk? Like, you know, can you call your house and find out? It's interesting. So I wonder how much more preparation we used to do without this ability to communicate broadly, like at the push of a button, or you know, 10 buttons or whatever. I'll just build on that for a second. So a couple things, and if we're gonna do the around-the-world thing, there are places still in the world where, although there's a whole bunch of people that have got their own device, there's still lots and lots of pay phones, right? So we don't see them necessarily in our cognitive niche, but I can think of Central American countries that I visited recently, and I'm gonna be visiting later this year, where if there wasn't a pay phone on every second corner, that will have changed only since 2017, right? So some cognitive niches still require that. In this diagram here, I think it's interesting that when we look at the verticals, we could probably see the offloading, the
idea that why would you need a pay phone anymore, that being sort of resolved depending on which niche you're in. But the horizontals, which are really the diachronic part, are what kind of frees us up to go and look at a turntable again, even though most people have got that same capacity to access whatever the media is on their phone. So again, I'm going to beat the same drum: when we look at these kinds of representations, it's a minimum of two. Do we look at the verticals and the horizontals? Do we see the stacks and the diachronics? Because when we incorporate the diachronics, then, looking back, I still probably won't go to a phone booth in Costa Rica, because I just don't need to, but I can certainly understand why it still exists, and it isn't just a novelty. It exists like you were talking about last livestream, about why we've automated water access: the people in those cognitive niches are still finding those tools useful. So I keep bragging about this Axel Constant character who writes these papers, but man, is he ever touching on important stuff, in terms of us being aware not just of what we're dealing with in the here and now, but of how that environment necessarily partitions us as much as we actively partition it. And so, go to Costa Rica; it's a pretty modern place, but they still have lots and lots of pay phones. They're in that second hump as we're transitioning from house phone to pay phone to cell phone; they're still in v2 there, or at some point where there are overlapping waves of tech.

Very interesting, about trust and glue at different levels or types. Do you trust that you'll be able to simply make the call? Do you trust that they'll be able to hear you on the call? Do you have trust that you'll be understood? Do you have trust that it will have action? If you call an emergency hotline number, there's a whole bunch of action loops that one can imagine, because one is providing sense stimuli for that operator so that they undertake some action, and you trust that a certain kind of action will be taken, though without directly knowing all the details. We may not understand how an ambulance gets called, or where the locations of the ambulances are, but we expect that upon a certain type of signal, a call, one will be arriving shortly. So that is very interesting. And then also, Dean, like you said, we're partitioning technology. If you install Linux on a computer, there's the disk partition, and there are so many times where technology is about partitioning and about putting the technology in the box. Technology is often in a box: it's wrapped in a box, and then it's in another plastic box or metal box. But then that actually changes our cognitive partitioning. And we've explored in this paper a sort of low bar and then a high bar, where more is different. The low bar was the offloading, where the technology is going to merely remind you of something. So the offloading can be freeing up cognitive thought cycles and allowing for future epistemic action: writing something down on the Post-it and putting it on the doorway, and it says "get milk," or it's yellow and you know that yellow means get milk. But you're offloading it to this relatively passive, more crossbow-like, more skateboard-like kind of technology. What starts to happen when technology becomes adaptive and anticipatory? So we are not just offloading, that's the low bar, but we're offloading and there's an active entity on both sides of the table now, and they're anticipating and expecting each other. What is happening when our technology use and modifications in the niche change our cognitive partitioning?

I've got a question, maybe for both of you, in the spirit of what you were just saying, Daniel. So you get some platforms where you want to
take all your photos and all your images from your holiday, and you make an iMovie, or you go to Final Cut Pro. Now, both of those platforms perform essentially the same function in terms of you being able to stitch together a bunch of images, but iMovie, in its effort to be adaptive, is all "How much of this would you like me to do? Would you like me to sync the music for you? Would you like me to apply the Ken Burns effect?", whereas with Final Cut Pro the assumption is that you have a much stronger control aspect, in terms of what you want to be able to fine-tune and continue to operate. I think the technology question and the environment question are interesting, because yes, it can be framed as freeing up, but at some point we have to ask whether it's option-removing: it's trying to help us, but it's not necessarily generating more possibilities, which is again another form of optimization. Optimization isn't always removing uncertainty; lots of times optimization is having more possibilities that are valid, that are viable. That's the harder of those two things: determining what's a viable alternative here, as opposed to, am I getting closer and closer to being able to regenerate the copy?

It makes me think about optimization when we're in the world of figure 1 and when we're in the world of figure 2. A human on a bike, and optimizing has to do with speed on the bike. But now we're in the figure 2 world, and we're all going on a walk together, the three of us. What is the optimum speed? It may have some of the same considerations as the bicycle (we want to get there, we want to be safe, we don't want to break anything), but then there is this relational layer where the optimum speed may be a pure group decision. And then, what you said about optimization: how is optimization related to reducing uncertainty, as well as, in what cases, reducing options, which can be one way to reduce uncertainty? You have equal uncertainty about 100 options; let's just reduce it to two. You're still uncertain, but there are only two options. But then you also mentioned adding more options, and of what kind. So let's think about choosing a cereal. There are 100 cereals there in the store, and we're in the figure 1 world: it's just the dead cereals, and they are passive recipients of our action selection. Maybe they're wiggling around a little bit on the shelves trying to escape, but basically they're dead, and we're taking adaptive decisions. So there can be uncertainty about which actions to take. Yes, we could increase our precision by having, for example, fewer alternatives. But then again, you said, what about optimization via more acceptable alternatives? What if there was a friend or a parent or an active computer system that said, "Would you like me to make some suggestions?" That's actually another option of a totally different type than cereals one through 100. You've added another option, another affordance for action, and in doing so, it's part of a trajectory that does minimize free energy and find some sort of satisfying solution. It's actually an opportunity for open-endedness; it puts a new affordance on the table when the adaptive entity asks, "Okay, could I make a suggestion about your cereal choice? Are you interested in this or that?" It's actually adding more uncertainty, potentially even a novel introduction: now there's this dialogue, and it's like, wait, I thought we were choosing a cereal, why are we now engaging in dialogue? But then it could be on the path to selecting a good cereal.

Can I just quote something? You just reminded me of something in the paper, under diachronic cognition via active inference, on 314. I'll just read it, because I think it's spot on to what you were just describing: "Cognitive assemblies are formed and maintained diachronically beyond the local organism-centered boundaries of individuals" (Kirchhoff; Malafouris; Stotz). Cognitive assemblies are decentralized systems or
networks of human and non-human agencies, which is what you were just describing (Latour, 1993), whose causal constitution depends upon self-organized processes distributed across the network that they constitute. So yeah, I think the authors would agree with what you were just describing. I remember reading that the first time and thinking to myself, again, of Stuart Kauffman's book The Origins of Order: you have assemblies within, and then you have assemblies in terms of a larger energy package, a constitution. And so how does that ebb and flow relative to your own homeostatic and allostatic heuristics that you're following? But that's again back to the witness piece, right?

I don't know if this is exactly how to say it, but I see this paper and research and thinking as being sort of the upswing of thinking about 4E cognition: extended, enacted, embedded, et cetera. What does it look like to be on the upswing of taking 4E cognition as a starting point? No longer does non-4E have to be straw-personed or discussed pejoratively. Car companies today are not doing slam pieces on horses; there's a recognized place and time for people who have horses, and it's now in a different tech vertical from people who are doing the differentiation between different cars they might like to buy. When a technology actually does threaten, when maybe you don't need to own a car, maybe there's going to be something you can trust that will take you where you need to go, that is where you start to see the threatening and the responses and things like that: when there actually is a possibility of a technological replacement. But there isn't an attack here on non-extended, non-embedded, non-encultured cognition. The paper does not spend a massive amount of time arguing that cultural priors influence the cognitive processes of individuals. In descriptive, concise, positive language, it is recouping and restating an extended perspective and taking that as a starting point for moving forward, and for modeling kinds of systems that previously had ineffective or incomplete models. Because it could hardly be said that this paper answers everything about extended cognitive process; in fact, we keep on coming back to the figure 1, figure 2, equation 2, equation 4 step, because it's subtle, but it is the contribution of this paper. They took active inference, which was always framed this way, extended it, and showed that actually figure 1 active inference was a special case of the setting where there could be a more adaptive entity on the other side. And then that becomes reabsorbed into active inference and is a starting point for what I hope we'll continue to discuss in the rest of this session: what do active active inference and extended active inference, particularly as a starting point, look like? Rather than, "Well, we've done all of these attack ads versus earlier models of cybernetics or of ecological psychology, we've decimated the opposition, and now the only model that you can go forward with is extended cognition. People who don't believe in extended cognition, just what are they thinking about? That isn't where our regime of attention or linguistics is, let the record show." So what does it look like when we actually do start to take steps forward in this figure 2 kind of world? Dean?

Can I ask a question on that, then? Would it not make sense to at least momentarily suspend many of the typical hierarchies that we apply? We can keep things nested, those things continue to exist, but isn't extended active inference asking us to start from a place of not knowing? I mentioned this in the last livestream. If both people are willing to take that risk and face down that consequence, or if the entire audience has taken that question on collectively, is that not the beginning of extended active inference, where now the environment acts as the scrutinizer, as opposed to, oh,
there's somebody who now judges all people who don't believe in extended active inference as being not legitimate? The first thing that we have to do is allow the environment to be the scrutinizer, as opposed to somebody else who's given that role. And I'm sincerely asking that question. I'm not questioning whether that's even possible, but if we're going to start down that road of applying extended active inference, then who's the judge here, and for how long, if it's extended active inference as opposed to mere active inference?

Blue, do you have a thought, or I'll go? Okay. It makes me look in the bottom right corner of our video, at activeinference.org, as we sometimes do. Before the action, potentially, there's this stop, wait, listen, perceive. Is that silence important to recognize too, like a silence before and after a word, so where does the action-perception loop begin and end? But you said: what happens when we start with not knowing? What if we start, or intend, for the environment to be the scrutinizer? How do we let the environment be the scrutinizer, whether it's short, long, transient? We have to act, because, hashtag figure 2, our actions are the observations of the environment. So what if we withhold action, or act in a way that the scrutinizer is going to get the wrong impressions from, like we're the adversarial demon: "Oh, I prefer lower CO2," but then, tee hee hee, I'm just putting a ton out. What could one ask the environment to make its inferences on? Only the Markov blanket, only the holograph, only the quantum experiment. And so action is how we allow the environment to make inferences, like two people operating a saw on a tree: one person's pushing while the other one is pulling. What are the consequences of our actions? We don't know, but we have expectations, preferences, and we reduce our uncertainty about all of that. But when we don't know, which is to say our cognitive process doesn't have precision, and precision could also be a false belief, so we have not-knowing in our extended active, in our cognitive process: do we infer first and wait until we have higher precision, which again can be a false perception? That is actually implicitly putting one into the position of the central scrutinizer. It's like saying, "When I'm ready to act, then I'll act, but I'm going to wait until then." These are all, of course, simplifications; I know there are counterexamples that people can think of, and that's rich, and those counterexamples are in a yes-and spirit, because these are only some of the actions that I or you could take. But when we empower the environment, that doesn't mean that we don't act.

Doesn't it mean we interact more and we get out more often? Isn't that the first assumption? If we actually want the environment to become more involved in whatever the joint self-evidencing is, don't we have to actually allow the environment in, by getting out more and interacting more? Blue?

So I kind of find this like, if a tree falls in the forest and no one heard it, did the tree really fall? If there's no action (and I guess a tree falling down is also a tree doing action, so maybe whether it's active inference or not doesn't really matter), I do think that something has to happen for the environment to perceive it; otherwise, what is the environment receiving? I think we're navigating on it, but I think it's harder for us to perceive in our minds how it might be navigating on us. We pick ourselves up, put ourselves on an airplane, and fly somewhere, so that's pretty easy to track. But I don't know that we necessarily know, until we bump into that pay phone in that foreign niche, how that's going to navigate on us. We don't know until we actually put ourselves in that position and then decide what that interaction will or will not be. Do I take a picture of it, because whoa, I just found something I haven't seen in a long time, or do I actually pick it up
and have its affordances play on me? That's the harder part to get your mind around. It seems like it's imaginal and philosophical, but it's really navigational and wayfinding and reorienting: updating, updating, updating.

Can I just respond really quick? I think what you're dealing with, or thinking about, when you're thinking about cognitive uploading, and this has been consistent through this stream and the last stream, is putting yourself in a foreign environment. So I would have to act by leaving my office, boarding a plane, arriving in Switzerland, and getting off the plane, and now I'm in a foreign environment, and so what do I perceive in this foreign, in this novel, environment? But simply the act of moving from a familiar environment to a novel environment is also an action, right?

Completely. It's an epistemic foraging action to be in a new informational niche, to jump into a new one, or to listen to a livestream that somebody might not otherwise, to take a little journey with us. And it is definitely one of the most important kinds of action to study, just like they bring up epistemic actions at the end of this paper: knowing where to look. What is on page 50 of this book? One has to have precision in an action; it's a policy to know where to look. That's an epistemic action. And Dean, you've brought together the spatial navigation elements with the epistemic aspect. And then even in that example of the pay phone, we see epistemic and pragmatic value. If it's high novelty, high epistemic value, that might induce a certain kind of action, like taking a picture, maybe, to document it; or there's the pragmatic value of the phone, like using it to make a call. So this type of action, to have the initiative just to get up and do something, and to have bounded modeling, just knowing that one will know enough when they're in that situation, or that it's low-stakes enough that it doesn't really matter: that's very interesting.

So let me read something that is in the live chat. Steven wrote: this paper extends towards the mind-body-spirit-environments dynamic by starting in terms of what most active inference models focus on, generative models. How can we start bigger, bolder, and work back in? So I'll just give a first thought; thanks for the question, Steven. When Steven says what most active inference models focus on or start with, that is the generative model of the particular entity. And the particular entity, the thing, which is the thing as distinguished from the environment, is this: the blanket states and internal states constitute the thing. Like the car. Of course, here we would take the car as process, and the car-as-process includes the roads and the stop signs, but the car as a thing partitioned from its environment is the blanket and internal states of the car. And so "extended," the idea that this is the thing but that it's extending its function in a more distributed way, in a way, as I think we may have even talked about last week, reifies that as the thing. When the person puts up the Post-it note and we call that extended organismal cognition, do we not reify, and maybe even implicitly constrain, what organismal cognition is? Because we're saying, well, an organism with a Post-it note is extended organismal cognition, so that means that regular organismal cognition isn't that. So in the figure 2 world, we can then ask: well, there's now a generative model on both sides. That's what we were looking at with our colorful boxes last time; each one of those boxes is a thing. And then we talked about how they make up a bigger thing together; they make up a different generative model together. So this thing, the green thing on the right, is still a generative model. So it's not like we're going to escape active inference's highlighting of generative models, but rather we recognize that generative models can have sub-partitionings, and that some of the cases we might want to think about involve a partitioning between what are classically
understood to be autonomous entities and their niche. So it's almost like, from the atomic active inference, not the quantum, just the single-entity view, where we're going to put less emphasis on the niche, then with extended active inference we start to recognize the niche as an active entity: not of the same type, but of the same structure, in terms of performing optimization on both sides. And then that really moves us to a place of synthesis, which is just holistic active inference, or it just is extended active inference, but we're not saying who it's extended from; we're taking a total modeling perspective on all these activities, as well as all of these activities now with the red line. But then what does that make the purple line, and what is its niche? And that comes back to the point about composability, which is that we could stop with just the black box and say we're just modeling the entity, and we're not going to think about the environment as a thing; we're going to think of it as the heat sink, just like our quantum colleagues reminded us. Or we could put things on both sides of the table, but then in doing so, we make them into this joint generative model. So again, we didn't escape generative models; we actually only reified the need for them, because now we've made this fusion generative model, but now we're back to where we were earlier, which is implicitly taking the environment to be the heat sink. Blue?

I have a thought. If you could go back to figure 2, that would be great. It's just about what happens, and this goes back to what Dean was talking about earlier when we were talking about scaling and where you put the Markov blanket. So here we have the niche and the environment doing this joint self-evidencing, and what happens when they're trying to fit to each other, right? We've all seen circumstances where someone doesn't quite fit in with a crowd; this is how people become school shooters or radicalized or bullied. We've all seen someone, doesn't matter who, they see their niche environment and they just can't quite fit, like a square peg trying to fit into a round hole. And this happens. I was just writing about this last night: if we were to take an organism and put it in a novel environment, like I was suggesting earlier, I mean, take a bacterium that lives in a hot spring and put it in a cold-water pond. It's not going to do very well, because that's just the wrong place for it to be. And I was really just talking with Chris Fields yesterday about what makes you put a Markov blanket in a certain place. This is one of the fundamental, big questions, it is the question: where do you put the Markov blanket? Where you can establish conditional independence, okay, so I get that answer, but at the same time, you can kind of put it wherever you want. So when there's maladaptive fit to the niche environment, does that make it pop out of the Markov blanket? And what is the threshold for that maladaptive fit? These are just the things that I want to know.

In the Active Inference textbook and in other sources, there's a discussion of the Bayesian brain and of computational approaches to neurophysiology, neurodiversity, psychopathology, all of these different areas. And by integrating them, we kind of see that, just as Axel's bacteria can be doing exact Bayes or variational Bayes free energy, if we have an optimization right off the cliff, that hot-water bacterium in the cold water is going to continue minimizing its free energy up until it dies. And so if systems are always doing free energy descent, where do we get to say which kinds we prefer? Maybe what we need to bring in on the other side of the purple line is the modeler. We've still been thinking about this in a very entity-and-its-physical-environment partitioning. But this comes back to what happens before the graphical model gets drawn.
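The "where do you put the Markov blanket" question can be made concrete with a toy simulation: in a chain where external states only influence internal states through blanket states, the internal and external variables decorrelate once you condition on the blanket. A minimal sketch (the variable names, coefficients, and linear-Gaussian setup are ours, purely illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# A toy linear chain: external -> blanket -> internal.
eta = rng.normal(size=n)                      # external (environment) states
b = 0.9 * eta + 0.3 * rng.normal(size=n)      # blanket (sensory/active) states
mu = 0.8 * b + 0.3 * rng.normal(size=n)       # internal states

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

def residual(x, given):
    # Regress `given` out of `x`; what remains is x with the blanket's
    # influence removed (ordinary least squares, zero-mean variables).
    slope = np.dot(x, given) / np.dot(given, given)
    return x - slope * given

# Raw correlation: internal and external look strongly coupled.
print(f"corr(internal, external)           = {corr(mu, eta):+.3f}")

# Partial correlation given the blanket: approximately zero, i.e.
# conditional independence -- the blanket is where the cut "works".
r = corr(residual(mu, b), residual(eta, b))
print(f"corr(internal, external | blanket) = {r:+.3f}")
```

Moving the proposed blanket to a variable that does not actually screen off the environment would leave that second number far from zero, which is one numerical reading of "you can put it wherever you want, but only some placements establish conditional independence."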
That's the first Markov blanket, where now, you in that paper, or you in that graphical representation, now there's a blanket between the two of you. But it's almost like there's this modeling impulse that is not extended. It's within a cognitive entity, whether that's a person bouncing it around their head, or a group that hasn't externalized the model but is bouncing it around among each other. And then when they externalize this as a model, that is an action. It's not the only kind of action to take, but that is the zero-to-one step. Yes, Dean.

And to both of your comments, one of the things in our program that we used to talk about all the time is: you don't want your sponsor, the person who is essentially taking you and inviting you into their professional setting, their professional cognitive niche, to be turned into teacher 2.0. We don't want you to have the same expectations of that person as you have of the person that's in the classroom. One of the things that you have to try to do is be adaptive and extend what you can then accommodate in terms of these various formalized cognitive niches that you're now gaining access to. So I just think that that joint self-evidencing part is really, really critical, because I think for most of us, it's hard to get our heads around the idea that, well, I have a certain agency, and then I have other people who have leverage over me and can tell me what I can and cannot do, versus the person who goes into a situation and says, I'm a proportion of a larger energy situation, set, volume, whatever you want to call it, and who's been able to find another person who, despite having a lot more experience, is also able to perceive themselves as a proportion and is prepared to Bayes update.
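Being "prepared to Bayes update" is just the paper's equation 1 applied to oneself: posterior belief is proportional to likelihood times prior. A toy sketch of that update over a two-hypothesis space (the hypotheses and all the numbers here are made up for illustration, not from the paper):

```python
import numpy as np

# Bayes' theorem over a discrete hypothesis space:
# posterior ∝ likelihood × prior.
hypotheses = ["sponsor knows best", "we invent together"]
prior = np.array([0.8, 0.2])          # walking in deferring to the expert

# Likelihood of observing the sponsor ask *us* a genuine question,
# under each hypothesis (illustrative values only).
likelihood = np.array([0.1, 0.7])

posterior = likelihood * prior
posterior /= posterior.sum()          # normalize over hypotheses

for h, p in zip(hypotheses, posterior):
    print(f"P({h!r} | observation) = {p:.2f}")
```

One surprising observation and the belief that seemed safe at 0.8 is already the minority hypothesis; refusing the update is exactly the "central scrutinizer" posture discussed above.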
And now, giving that relational composition all the support and resources that you possibly can, so that you're actually generating this possibility of being able to invent together, as opposed to one person downloading to another. Which I've found traditionally, even in those transdisciplinary settings that I really like, even those tend to regress back to an average of: I wanna stay in my safe place, I wanna stay with what I know and only share those things that I know. Rarely will you find a situation like the one we have here, where every single time I walk into one of these livestreams, I assume I'm gonna learn way more than I can bring to the actual conversation. And I come into it with that humble sense that I can't come from a place of knowing everything; I have a few experiences that I can share and say, I think this validates some of the ideas that we have, but I never come at it with an attitude of, well, I'm gonna school everybody up today. I think it's the interaction part. And I think it's the getting into those places of necessary discomfort, because we do find ourselves, lots of times in these papers, not really knowing where it's gonna end up. And you have to have the strategies and the attitude to be able to handle that. And so when we develop that in people, what does that do in terms of giving them a better sense of what applied active inference really means? We have to go a step back and say: so what strategies do you use right now? What are your default heuristics? When you find those heuristics, as Blue pointed out, being applied in certain social settings and getting rejected, now what do you do?

Okay, I'm gonna plateau on this slide elaboration to see what this might reflect, on either of you, about applying active inference. So now we've added in another layer.
So here's the active inference modeler team making the figure 2 model, making a model where there is intelligence on both sides of the table, whereas here they're making something more like figure 1. So in the more figure-1-like case, there's more of a tendency, speculating, for the modeler to just project their cognition into the model as it interfaces with a more passive or non-agentic surrounding. When we start thinking about making models where there's agency on both sides, then we have that realization that we're also on the other side of something. And so that is some of the implicit, esoteric, impactful interface of modeling as development, and modeling as personal and multi-scale development, because there is a difference between making these two different types of model: in what the individual thinks, and their thought loops change how they act, and their actions are the environment's sense states. So there are a lot of ways to go with it, and a lot of loops of cognition and perception and action. But again, it's taking those intertwined and nested and laterally interacting perception-cognition-action loops as a starting point, not as an ending point. So we don't need to talk about how it's not that way; we need to talk about applying active inference so that it can be a starting point. And it is a starting point, and we're on the journey, anyone who is paying attention is, but what happens as we continue on that journey? Are either of you gamers? Do you game?

I have a little bit, as a teenager or something, but not so much anymore.

I don't do it at all, but in that virtual environment, I have a number of friends who have picked up virtual reality gear. And they absolutely love it. In that kind of an environment, they are neophilic through and through, and they want to share that with me. I mean, they want to invite me in.
They want me to join that world of playing virtual tennis with them, because they just have so much fun in there. So a big part of this, although it's not my cup of tea anymore, is that I can see where you can take these basic principles of applying active inference. And it doesn't always have to be: pick yourself up physically and board an airplane or whatever. You can do it virtually as well. It's just knowing when those conditions and circumstances are such that people want to move concentrically and circularly around a certain set of conditions, whether limited by the number of pixels or limited by wanting to control their carbon footprint. But again, the applied active inference part isn't necessarily always "I have to get on the big rocket and away we go," sort of thing.

I think this does take us towards this question of wanting to apply active inference, or wanting to be applying active inference: expecting and preferring oneself to be applying active inference. If you don't expect and prefer that, fine, do what you expect and prefer, and you will. But then what brings one from seeing a phenomenon, from being curious about pay phones and technology adoption in this or that region of the world, or wanting to understand how to model moving a bacterium from one temperature and salt concentration to a different niche, or all of these other processes that come up as examples or as ancillary to some of the discussions that we have? How do we go from that phenomenon that has taken our interest, that we've distinguished from the other phenomena? Like, I'm talking about the rotating of the hurricane; I'm not talking about the fact that the clouds are gray or that it's 21,000 feet off the ground. There's a phenomenon, not the only phenomenon, but some thing, and "thing" ends with -ING too. Some thing that has come onto our radar.
And then how do we go from that TH-ING, or other thing, towards active inference models? And that space before the modeler has even written down a Bayes graph, before they in their journey may even know what a Bayes graph is, or a way to interpret it, let alone the minimum of two ways to interpret it: what's happening in that space? And is that applying active inference, even if it's only recognized later that it was applying active inference?

Well, I don't know much more to say on it, except that I think a lot of us are trained in formal education settings to never actually really apply active inference, given that formality around what we predict we have to do in order to come out the other side of that particular kind of formalized and structured learning journey. That's what I know from my own experiences. I mean, until there was an active inference group, something formally organized around the concept of active inference, I would not have predicted that there would be the kind of interest that we are now trying to evolve and support. This is still very nascent in terms of its acceptance and its uptake, if we're gonna talk about uploading in terms of what kinds of alternatives we could at least give some consideration to.

I can show an image from the Active Inference textbook and kind of reflect on that. And this is just the appetizer of the appetizer, because of course we're gonna have the textbook group and multiple cohorts of that, where we go through it, and the textbook is an appetizer to the appetizer. So it's a fractal, it's a crumb. We've been talking a lot about graphs that look like the one on the top.
And so this is the same kind of partially observable Markov decision process written in a slightly different way, with slightly different labeling, like a three instead of a D and a two instead of an A and a B here, et cetera. So same, but a little different. And in this type of model, we talk about the state at a given time, S of T, and then we talk about the transitions of it being different at a different time point. Like here is the past, and then the transition to the present, the present, the transition to the future. And then at each of those time steps, there's the emission of some observation. So there's the dynamics of the transition of the underlying hidden state, and then there's the emission of observations. That's the part that makes it partially observable, and then there's action in the loop via the policy, pi. Okay, that's the top one. The bottom one is clearly drawn, if we can have a theory of mind of the authors, to reflect some isomorphisms or structural similarities between this top representation and the bottom one. And they discuss, we're going to appeal to these throughout the remainder of the book. Okay, so the top is the POMDP, a sequence of states evolving through time, indexed by the subscript. The bottom is a continuous time model of the sort implied by stochastic differential equations, with prime notation indicating temporal derivatives. So here X is the hidden state now, X prime is the rate of change of the hidden state now, X double prime is the second derivative, so not just the velocity but the acceleration of the hidden state now. Each of those is emitting observations in this Y prime space. But now the B up here does not reflect how things change through time but rather how derivatives are taken now. And so the top is like time series planning with time series inference. Where should we be in five time steps? Well, let's explore and roll out where we could be in five time steps.
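The top figure being described, hidden states evolving through transitions while emitting observations at each step, can be made concrete in a few lines. This is a toy, pure-Python sketch, not the textbook's actual code; the two-state setup and all probability values are hypothetical, with A as the emission (likelihood) mapping, B as the transition mapping, and D as the prior over the initial hidden state, following the matrix labels Daniel mentions.

```python
# Toy POMDP generative process: hidden states evolve via B, and each
# state emits an observation via A. The emission step is what makes the
# process only *partially* observable. All numbers are hypothetical.
import random

random.seed(0)

D = [0.5, 0.5]                      # prior over two hidden states at t = 0
B = [[0.9, 0.2],                    # B[s_next][s]: state-transition dynamics
     [0.1, 0.8]]
A = [[0.8, 0.3],                    # A[o][s]: observation emission probabilities
     [0.2, 0.7]]

def sample(probs):
    """Draw an index according to a probability vector."""
    return random.choices(range(len(probs)), weights=probs)[0]

def rollout(T):
    """Roll the hidden state forward T explicit time steps,
    collecting the state and its emitted observation at each step."""
    s = sample(D)
    states, obs = [], []
    for _ in range(T):
        states.append(s)
        obs.append(sample([A[0][s], A[1][s]]))
        s = sample([B[0][s], B[1][s]])
    return states, obs

states, observations = rollout(5)
print(states, observations)
```

The "where could we be in five time steps" question corresponds to calling `rollout(5)` many times and comparing the sampled trajectories.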
The bottom is more like a Taylor series, because we're staying with both feet in the now, and then we're just asking, okay, where are we now? That's our first fundamental frequency. Then where's the rate of change, and where's the second derivative, the rate of change of the rate of change? And so we're approximating a future unfolding using only the generalized coordinates of motion at one time step. And so there isn't an explicit modeling of any future time points at all. It's not X prime of T plus one. This is all about the derivatives of motion at this time point. So we've been having some interesting conversations. What kind of thing allows these representations, and then thinking about what kind of thing that is, and actually how different these representations are despite being able to be laid out graphically in a similar way. There are more in this family of graphs that we haven't seen. Even if these two uniquely identified one thing, and that one thing was the one thing, there still would be other representations we haven't seen. So, Blue. So you're way ahead of me in the textbook because you're a good student like that. I haven't come to that picture yet. But I think it's interesting that when things are changing rapidly, we notice them more, and when things are changing slowly, we don't notice them at all. So what effect do the first and second order derivatives have on our perception? Like a frog being boiled: if you take it in room temperature water and slowly increase the heat, it doesn't really notice that it's being boiled alive. But if you take it from its nice cool pond into a pot of boiling water, it's gonna big time notice. So I wonder, does perception increase with first and second order derivatives, or does the magnitude of the sensation or the perception increase? When some things change so fast, they again become imperceptible, like helicopter blades, or like a frequency that just becomes inaudible.
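The contrast Daniel draws between the two figures, explicit rollout over future time steps versus generalized coordinates of motion held at a single instant, can be made concrete with a truncated Taylor series. A minimal sketch, with all numeric values hypothetical:

```python
# Generalized coordinates of motion: the state "now" is x together with
# its temporal derivatives x' and x''. A short-horizon future is implied
# by a truncated Taylor series around the present, with no explicit
# future time points represented anywhere in the model.

def taylor_extrapolate(x, x_prime, x_double_prime, dt):
    """Second-order Taylor prediction of x a small interval dt ahead:
    x(t + dt) ~ x + x'*dt + (1/2)*x''*dt**2."""
    return x + x_prime * dt + 0.5 * x_double_prime * dt ** 2

# Hypothetical state now: position 1.0, velocity 2.0, acceleration -0.5.
now = (1.0, 2.0, -0.5)

# The trajectory implied over the next few instants:
for dt in (0.0, 0.1, 0.2):
    print(dt, taylor_extrapolate(*now, dt))
```

Note that the model never represents X at T plus one explicitly; the near future is carried entirely by the derivatives held at the present moment, which is the "both feet in the now" point above.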
Is it like parabolic? Like, you know, we get max perception when the rate of change is between this and this, and then no perception below and no perception over on the other side. It's like tuning in and tuning out. There's some wavelength that's being optimally perceived, and then there's trailing off. So now let's go back to figure two and think about, Blue, like you brought up at the beginning, improvisation. Now, when we're doing improvisation, in what ways do we do time series modeling to explicitly predict how I will be and how they will be at specific future time points? And/or, if we are uploading, not just offloading, what if we just ask: where is their arm? Is it accelerating? Is it changing how it's accelerating? Rather than, where will their arm be in 30 minutes? One can just say, in the very short term, in a playful space, how can we just listen to how things are, how they're changing, and how that's changing, and so on. How does that move from playful contact improv into digital stigmergy that makes useful productive technologies? That's another question. Neither of those questions is better. It's awesome that we have a framework that helps us address both of them as well as other questions. What happens when we think about offloading and uploading using the structures that we've seen? We won't go into it today, but we've added this extended perspective to what will be evoked when we hear about ActInf in the future. It's like charging that term with more and more ways to relate to it. It'll be interesting to find out. There will be epistemic and pragmatic value in finding out. Dean? Do you, Daniel, because I haven't seen this before. You've put it up here today. You've got the book, but I've just read the introduction.
Do you think, going forward, if we're going to get to a place where we're gonna feel comfortable in terms of developing the conditions around which active inference is more easily taken up or adopted, or given there's a certain confidence around using that methodology to inform what's on the other side of the yet-to-be-observed, is there a Monte Carlo aspect of this, in terms of just having to play out a few hands and getting comfortable with the fact that that takes a while? Is that perhaps why the diachronic piece was introduced and kept right near the end, where the remarks are, in the sense that we all seem to be time constrained and we wanna make sure that we're getting value for our dollar from our ways of doing things? But if we're gonna look at it from the perspective of distributing cognition across the globe, there are still going to be some ad hoc aspects of this that kind of play out Monte Carlo style. Like, if you went to one performance and then you went to the next performance five hours later, would you go in with an expectation that you're going to see an exact copy of what you saw before? And does that change the value of being now part of that collective, or that constituted cognitive niche? So maybe one of the things we have to do is ask ourselves, if we want people to be able to take this on, the idea of active inference and extended, how do we make it safe for them not to feel as though coming up with the correct answer out of the series of four is the only way to measure whether or not you're gaining confidence around how your generative model gives you a chance to maybe decide who the two finalists are in said competition, still six weeks away from now, right?
Like, and I don't want everything now to become about online gambling, but I mean, there are actual things that we can find in the niche that are currently there that would give people a better sense of what information they may or may not need in order to open that basal ganglia gate or close it, because a lot of times we even know that we should close it, that we just should not go forward. So yeah, again, I think some of these things come back to how much time we're willing to spend on it, and whether we see the investment of time at the front end paying off exponentially at the back end, or whether we just stick with those methods that we are familiar with because we know at the end of the 13-week cycle we can say we've now accounted for X amount of a download or an offload or a dump. I wonder if that's part of where we're at. This is only a partial alignment, just a yes-and to what you said, not a direct example or a contradiction or anything. But if we think about the tech adoption curve of cars and advanced modern technology, it asymptotes back infinitely. When I hear you talking about the 13-week, quarter educational paradigm, the instructionist paradigm, which may be able to get you from three out of 10 on the week one assessment, and then we have a strategy that gets this percentage of people to seven out of 10 on the week one assessment with this set of instructions. The teachers get the instructions and they give different instructions to the learners. Then you said, in contrast, there's putting the time in upfront and receiving superlinear reward. And so it's like the long phase where the technology is developing, and that's sort of happening in someone's life. And then I saw that as connected to the gaming question.
And so we have a lot of the tools for that micro-gaming. Getting people to engage in instructionist activities has been gamified to a fine art and science, and it will become more intense in the coming years, we can expect, even if we cognitively say we don't prefer that. So how do we interact, not just instruct, in a way where somebody who is spending one hour a week on active inference goes to 10, and somebody who's at zero goes to one? But we don't need to take 10 to 100, probably, most of the time. And how much time should people be spending on active inference? What should they expect to get out of this? What do people expect from listening to live streams? They should comment. If they're listening, it's a special request: they should comment on what they do expect. That would be interesting to know, because otherwise it's us acting without that much of a sense back from our environment. And so that puts us more into a figure one paradigm rather than a figure two paradigm, even if the material is so massively nuanced and interactive itself. So there's definitely a lot on the table. Hopefully the table is set such that those who are 5.8 hours into paper number 41 are more intrigued and motivated, rather than reflexively looking for that two minute explainer video, because some of these ideas may occur at a cognitive pace that is slower than that video as a whole, and require regimes of attention that are not amenable to being driven in the same way as addictive repetitive behaviors in other micro-gamified educational situations. Blue? So there's a whole science behind the gamification of everything. And I wonder about our ability to do that in an educational setting, but also in a work setting. I mean, they have performance metrics and everything. I mean, we have metrics for posts on Twitter. How many likes do you get? How many people liked the photo you put on Facebook? I mean, we have metrics for everything. And so we can gamify.
I mean, we gamify things for ourselves, but things are externally gamified for us. And I wonder if it's successful, and it's very successful. I mean, I don't know that it's optimal, because I think it leads to other problems, but with the gamification of everything, is that like a replacement of our preferences with just winning the game? Like ultimately, you know, I would prefer to be happy. I would prefer to engage meaningfully with my colleagues and, you know, fellow students in the class, but instead we are in this weird, warped, gamified way where everything conforms to that. Even though it's not aligned with my preferences, I shift. So the preference for winning the game just takes over your entire model, with all of your other preferences. Like, why does that work? I wonder, can we active inference that? Like the active inference of gamification, that would be super interesting to just look at. Is it that that one preference for just winning, ticking up a notch or gaining more metrics, just overrides all of your other preferences, or does it just hit harder with the dopamine, or what's going on there, right? Definitely an important area to model, and it reminds me of Mark Miller et al.'s work, so yes, podcasts and papers. People should for sure listen to how it's framed by him and by colleagues. But there are the drives and the preferences, like thirst, like Blue just enacted, or for temperature. And so within each drive, there is a pretty clear preference, like to be in a bounded range, or like more is better. But then there's a second level process by which one is asking, are those optimizations across domains happening better or not as well as I'm expecting? Like, is this a balanced strategy, where every time I get to 98% on my phone's battery, I'm always charging it?
But then all of a sudden I'm neglecting all this other stuff. So I'm too attentive, for example, to the battery drive, to the point of it not being any more adaptive a policy selection than just waiting till it's at 70% most of the time. But the uncertainty about those times where you do need that 98% battery is what compels people to make these seemingly optimal decisions that may neglect other drives. So, being able to model that, what's called gamification. And then the other piece that made me think about is, we don't get to see figure one and figure two. We don't know what's on the other side of the blanket, whether it's merely adaptive or what it is. And so it's not like we'll know figure one when we see it and we'll know figure two when we see it, and we'll have a super clear strategy for when to engage in the figure one way and when to engage in the figure two way. It's more like, and potentially this is what learning and applying active inference in a personal capacity may look like for some: what does it look like to engage with figure two as your null hypothesis rather than figure one as your null hypothesis? And we'll have closing thoughts, with none of you having the ultimate phrase. Dean, please. I think what you're both describing is maybe that service piece as gamified, which isn't necessarily cooperative games, but that's essentially what we're talking about. So that's, and I'm glad I got it in, because I hate the final word. Blue? Yeah, thanks. It's interesting. Uploading, cognitive uploading. What amount of uploading has been done by gamification? I wonder that. Like, that's a curious thing for me to now think about. I should be thinking about other things, but now I'm gonna think about that, so thanks. Steven wrote in the chat: gamification pushes induction over abduction, favors inductive logic and playing within the rules of the game rather than abductive logic, which I hope we'll be able to talk about later.
So the participation is tied into a reinforcement learning paradigm that limits meaningfulness, because if the game is only about pragmatic value, then mine more. But if the game includes an epistemic and a pragmatic component, it's already opened up. Now, what if it contains an improvisational and participatory element? Because then it's not just that we expanded the pragmatic value into your epistemics and your pragmatics, but there's service to another and to the environment. And so maybe that is a good place to end slash begin with 41, and we'll head into a whole other set of topics as we move forward. So Dean and Blue, thanks a ton. Great discussion and great and special times, and thanks, Steven and everyone else in the chat. So peace out, fellows. Bye. Bye. Thanks, team.