All right, welcome everybody, and thanks for joining. Today is June 9th, 2021, and we're here in ActInf Lab MathStream number 3.1 with Dobson, Safron, Knight, Prentner, and Friedman. Today is going to be a fun discussion. If you're watching live, we hope you'll ask us some questions, because we certainly don't have much planned and would appreciate that feedback from you. We're going to jump right into it and, by way of introduction and welcome, ask: what is active inference, and how did we get here? We'll go around and give some initial conditions, some initial thoughts on these questions from wherever we're each coming from, as we converge on this discussion, and then we'll jump in and continue from there. So who would like to give a first pass on either or both of these questions? I could try. Thank you, Adam. Go for it. So for me, active inference is a normative process theory for how a principle of what it takes to exist, the free energy principle, or what it takes to persist, can be realized. It asks: what are the things you have to do if you want to outsource your entropy, or avoid getting all mixed up in the meat grinder of existence? What kind of intelligence, what kind of modeling, will you need to exhibit to do this? So in a way, for me, it's the answer to the question of what is mind, and potentially, with certain versions of it, what is life, and with other versions, what is consciousness, and maybe even what is free will. Nice, big questions. And how did you get here? I have no idea. It all started with an existential crisis when I was young that went on for a very long time. But then I found my way to the architect, a.k.a. Karl Friston. It was basically through Jeff Hawkins' book On Intelligence that I learned about predictive coding, and that was my on-ramp to Karl's work, and the rest is history. Awesome.
So we heard some big general ideas: what is mind, what is life. Who else would like to go? Or I'll give a thought. Bleu, want to go for it? Sure. I came to active inference through my interest in scaling: how do you scale processes from very small, like subatomic, to cellular, to organismal, to societal? That was how I got hooked into the active inference framework. And for me, it's a framework that gives a pretty good representation of how we tend to represent and think about the world. We predict, and we get feedback. We predict something will happen: if I turn the thermostat up, the heat will go on. So we predict that, we take the action, and if the heat doesn't go on, we have a surprise. And then we go, well, what made our model go wrong? Maybe my heater is broken. So it's about minimizing the surprise and the uncertainty in our interactions with the world. Awesome. Would either of the two of you like to go? Sure. Shanna, do you want to, or should I? Go ahead, I'll go after you. Okay, well, I'm perhaps in some sense the outsider, because I'm not doing research with active inference; I'm a philosopher with a background in philosophy. And I think active inference is a very interesting thing. First, of course, I like these big general questions that Adam raised at the beginning: what is mind, what is life, what is consciousness. That's very close to what I normally think about. As a philosopher, I'm a bit skeptical whether one could actually answer these big questions, but that's a different story. I think active inference is very interesting from a philosophical-historical perspective as well, because it embodies a tension which ran through philosophy, through the history of thinking, which more or less started with a very famous philosopher, Immanuel Kant.
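Bleu's thermostat example above can be sketched numerically: surprise is the negative log-probability of an observation under the agent's model, and a surprising observation ("the heat didn't go on") drives a Bayesian update of the model ("maybe my heater is broken"). This is only an illustrative sketch; all the probabilities below are made-up numbers, not values from any published model.

```python
import math

# Prior belief that the heater works (assumed number for illustration)
p_works = 0.95

# Likelihood of observing heat given the heater's state (also assumed)
p_heat_given_works = 0.99
p_heat_given_broken = 0.01

# Predicted probability of feeling heat after turning the thermostat up
p_heat = p_works * p_heat_given_works + (1 - p_works) * p_heat_given_broken

# Surprise (surprisal) of an observation is its negative log-probability
surprise_heat = -math.log(p_heat)        # small: the model predicted this
surprise_no_heat = -math.log(1 - p_heat) # large: "what made our model go wrong?"

# Bayesian update after the surprising observation "no heat":
# posterior belief that the heater still works
p_works_post = (p_works * (1 - p_heat_given_works)) / (1 - p_heat)

print(surprise_heat, surprise_no_heat, p_works_post)
```

With these toy numbers, the belief that the heater works drops from 0.95 to roughly 0.16 after the surprising observation, which is the "maybe my heater is broken" inference in miniature.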
I mean the distinction between an object or a world, what he called the world in itself, an outside world, and the way we represent it. And we can go two ways from here. One could stay in this Kantian paradigm, where the question is how we actually perceive, or are able to make representations of, the outside world; or one could go in a different direction and say, oh, it's really about acting in this world. And active inference is an interesting mix of these two ideas. My experience is that some thinkers in the active inference camp tend to align closer with Kant, or the neo-Kantian tradition, and another line of thinkers tends to align more closely with this perception-action way of thinking. So I'm very interested in this tension, and maybe in thinking about how it could be resolved. So I'm curious. And how did I get here? Well, I worked a lot with Shanna here, and Shanna basically got me in. So yeah, thanks to Shanna. Thank you, Robert. Shanna, go for it. Yes, so, right, what's active inference? I too am probably more outside than Robert is; I'm the math person here. So I'm off in the woods trying to think about, like Bleu said, how to scale all of these theories. You have this formalism of active inference based on discriminating between a system and its environment, and I think that's predicated upon a very serious boundary argument. So I'm really interested in boundaries. I'm also interested in how boundaries are predicated upon a possible holography theorem. When we say an agent is interacting with its environment, how does that actually work, in a categorical way? We have a Bayesian, probabilistic way, but I'm interested in how you do that categorically. Can you mathematicize that even more and infinity-categorify it? The distinction between inside and outside is, I think, a very, very beautiful one.
And so how can we actually say that what is represented on the outside is the inside, inferred? What are the mathematics of that? This also seems to be happening in a certain version of simultaneity, so I'm really interested in the mathematics of simultaneity, and also in this synchronic and diachronic emergence phenomenon. I think the active inference formalism is dynamical enough to allow these mathematical ideas to come in. And if you can mathematicize that, then perhaps you can answer these questions that Adam's asking: what is consciousness, what is mind? Those are the heavyweights, and I think that's why you have the heavyweights on this panel. As for the question of how I got here, I'm trying to figure that out myself. I didn't know if that meant existentially or whatever; somehow I just kind of came online. If you want to know how I got into this interface, I don't know. But my very good colleague Chris Fields gave a talk, and I attended, and he was asked a math question, and I answered it, and then the community welcomed me. So that's how I got here, but I don't know how I got here. Awesome, thank you. And yep, excuse any strange noises in my background. What is active inference? I really liked a bunch of pieces that people brought up. Robert, you brought up that there's this action element, this pragmatic turn, looking at action and perception in the world. And then right there, the second word is inference, and that's the transcendental, Kantian type of approach. So when I look at active inference, I see action and inference. That sounds a little first-level, but those are the two pieces coming together: traditions of action in the world and traditions of inference in the world.
Now, compared with reinforcement learning or other approaches to thinking about agency and action in the world, or other economic metaphors, active inference diverges in a few key ways. It focuses on the reduction of uncertainty rather than the maximization of, for example, reward or value alone. So uncertainty reduction is a key component of active inference, and so is what Adam brought up: the resistance to dissipation, the characteristic of living systems to actively resist dissipation by making increasingly adequate generative models of their niche. And then I also see active inference as a bridge to disciplines and communities such as systems engineering, ecological psychology, or embodiment, all these different areas that relate in one way or another to action and inference, or to philosophy. There are so many roads that come together at active inference. And that's a little bit about how I got here as well. I remember hearing about it for the first time from Jelle Bruineberg, who came on an earlier stream. It was a picture of, I think, a Dutch person holding a cup of coffee, listening to music, and biking with one hand, raising this question of skill and of action in the world, and this provocative notion that the current models we had for action planning weren't doing their job: they couldn't explain that person on a bicycle. Since then I've just been learning by doing. So cool for this first question, and I'm sure we can return to it and ask more questions about active inference as we go on. And of course, anybody watching live is more than welcome to ask us a question, which I'll relay. Let's go to some of the questions that we've thought about before. This slide asks: what is the role of space in active inference, or in other areas people are familiar with working in, and what is the role of time?
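The contrast drawn above, reinforcement learning scoring a policy by expected reward alone versus active inference also valuing uncertainty reduction, can be sketched with toy numbers. This is only an illustrative sketch under assumed values: the policy names, rewards, and information-gain figures below are invented, and the "negative expected free energy" here is simplified to pragmatic value plus epistemic value.

```python
# Two candidate policies over some hidden state (all numbers are assumptions):
# "exploit": high expected reward, learns nothing about the hidden state
# "explore": lower expected reward, but resolves uncertainty about the state
policies = {
    "exploit": {"expected_reward": 1.0, "info_gain": 0.0},
    "explore": {"expected_reward": 0.4, "info_gain": 0.69},  # ~1 bit, in nats
}

def rl_score(p):
    # Reinforcement-learning-style evaluation: expected reward alone
    return p["expected_reward"]

def neg_efe_score(p):
    # Active-inference-style evaluation: (negative) expected free energy,
    # combining pragmatic value (preferences/reward) with epistemic value
    # (expected information gain, i.e. uncertainty reduction)
    return p["expected_reward"] + p["info_gain"]

best_rl = max(policies, key=lambda k: rl_score(policies[k]))
best_ai = max(policies, key=lambda k: neg_efe_score(policies[k]))
print(best_rl, best_ai)  # the two criteria pick different policies here
```

With these numbers the reward-only criterion picks "exploit" while the free-energy criterion picks "explore", which is the divergence described above: uncertainty reduction is valued in its own right, not only reward.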
And then we have that non-locality, in parentheses, for both space and time, because maybe we're curious about what non-locality looks like in space and time. Anyone's welcome to raise their hand or give a first thought. Daniel, can I give a first thought? Yeah, yeah. So the probabilistic setting and statistical mechanics, that's the mathematics that fuels active inference, right? Before you have me coming in and trying to categorify everything, just for the active inference experts in the crowd, I was going to ask you: the role of space, what is it? Are you modeling all of this on some kind of Euclidean manifold, or are we looking at something a little more interesting? Where I come from: if you really believe there's only now, then what does that actually translate to? If we take these powerful philosophies seriously, that there really is no future, or that the future is some kind of potential you only actualize now, can you translate that for me into what your space looks like? Quantum mechanics works over a sort of non-Archimedean space where you have a very severe cutoff; you don't have these infinitesimals and things. So what does the space actually look like for someone like me who really believes that this interface is discretized, that every other nanosecond it does not exist, whatever that means, so that the whole thing is a grand interpolation? I just wanted to know what y'all thought: in this inference model, is space actually jagged and singular, or are you assuming it's some sort of interpolated continuity? Nice question. Adam, any thoughts there? A few. So most of my thinking has focused on goal-oriented behavior, consciousness, agency, pursuing goals. That's in general what I'm gunning for, and all questions end up getting filtered through that for me.
So, for instance, I could see space and time, in different senses, showing up in different ways. In a theory of consciousness I recently proposed, I was suggesting that we can think of it in terms of world modeling, but that for this world modeling to actually be capable of bringing forth a world, of being experienceable, it has to have certain coherence properties. And so I was thinking of Kantian a priori categories, space, time, cause, maybe some sort of unifying minimal selfhood, as not just preconditions for judgment, but also preconditions for any kind of sense-making whatsoever, and for the appearance of a world. People could object to that. But then the question is: what exactly do you mean by space, time, cause, and self? How much, what kinds, in which circumstances? For me, space I tend to think of more in terms of locality, relative degrees of proximity among things. And time would then be things changing within space in a kind of proportional sense. There are other senses of time, where you could have clock-like processes that are either running in parallel and coming to agreement, or centralized in the sense of certain bottlenecks that force a kind of temporal ordering. Or you could just think about things changing relative to other things at different rates. There's, I think, a problematic circularity in what I said, but in terms of the senses of space, well, I'll stop there, and I could keep going later. Just a few basic thoughts on space. I thought about embodied agents and how an agent has a location in a spatial model, like the active inference paper we had where different ants were at different locations in space. Now, there's a difference between the space the actual physical thing is in and the model, which relates to this Kantian distinction perhaps. But even when we have a disembodied agent, there's a state space.
So I don't know what the relationship is between a mathematical state space and whatever you were mentioning previously. And then as for time, it made me think about the distinction between Chronos and Kairos: Chronos being a metric or measured time, a model of time, something regularized, and Kairos being timeliness. Action happens in both: action takes clock time to carry out, but action also has its own time scale, and actions have their own sort of story arc. Yeah, Bleu, go for it, and then anyone else who raises their hand. So space is something I've thought about a lot relative to my previous work, and I think about it a lot in terms of boundaries, right? Where does one thing stop and another thing start? Even something as simple as, how do I define my fingernail? Where's the boundary between your fingernail and your cuticle? Anybody who's really dug in there knows there are some parts of your cuticle where you go, am I supposed to cut this or what? So there's not a clear distinction or line that can be drawn around what is my fingernail. And so I try to find this distinct boundary also in terms of inclusion in a group. I listened to your last MathStream, Shanna and Daniel, and I was thinking about this; I've been thinking about cancer a lot in terms of inclusion in a group as you were talking about that. A cancer cell goes rogue: it's somehow not included in the group, not part of the collective anymore, and then it takes on its own self-interested agenda. And I also think about that in terms of a school shooter, or someone who commits mass violence. It's someone who's not included in a group.
It's someone who is not included even in a rebel group, like al-Qaeda or something; they're rejected from that group too. So when they can't find inclusion, they become this rogue agent. And so I think about space as: where do you draw that line? I have a hard time finding that line, and it's something I like to push on. And then time: I don't really work with time much per se, other than having to deal with this linear time that we're trapped in. But I do think about expansion and compression of time relative to age. As a young person, time seems so expanded, five minutes is forever, and as you get older in life, a year just flies by, right? So time is expanded when you're young and compressed as you age through life. Thank you, Bleu. Robert, then Shanna. Yeah, thank you. So I have more of a question than something really interesting to say, but it's a question for you, or maybe you can give me a good answer. Here on this slide there's that second word in parentheses, non-locality, and my guess is that Shanna put it there. My question would be whether there is anything that remotely resembles some notion of non-locality in active inference. To make that a bit more concrete: my impression, when I read articles by people who endorse active inference or work in that area, is that they usually work against a very classical notion of space and time as a background. You mentioned the ants that move in physical space, and that's usually assumed to be a classical space. Or if you think about a cell and apply the concepts to a cell, then you say, okay, the molecules, it's a classical thing. So there's no notion of non-locality there, I guess. My question would be whether, and how, active inference could accommodate some notion of non-locality. Nice question.
We can definitely wander and come back to that. Shanna, and then Adam. Yeah, thanks. So just to get to the time part, because I hadn't talked about that. Thanks, Bleu, for watching the last stream. I'm really fascinated by time, and I don't think my time really works linearly. It's really hard to say where yesterday is. You can do this light-cone sort of formulation, and if you want to general-relativity it, then you can say, oh yeah, yesterday is in the light cone of whatever. But what is it, and where is it? So when I asked about the role of time: the thing I've proposed, in that FMOF K-theory diamonds paper I have about the applications of all this higher-category stuff, is to think that maybe time is more like a pro-time. In math you have this pro-object, which is glued together from a bunch of morphisms and other collections. So it seems that time is this thing that is also emergent; I'm an advocate that it's not fixed, right? I'm also an advocate that I don't really know how you measure time without constructing it. So I'm not sure whether we're actually constructing a notion of now, or whether it actually exists. In active inference, where you're constructing every second, every unit of time, how can you actually measure the inference without constructing a parallel notion of time within the same thing? So when Adam talks about coherence: if you need a coherence property for temporality to actually give you a singular now or something, what does that actually look like? So Robert's right, that was me who put the non-locality in there, because we have a rough idea of a non-locality of space.
You can have a two-point perspective of space. In my state, where I'm not an entangled pair of photons, I can clearly say, oh, Silver Lake is over there and I'm in my condo on the other side; there's a separated distance. But according to a pair of entangled photons, there is no separated distance. So you have a two-point perspective of space. In the MathStream I did last time with Daniel, we were trying to figure out whether there is actually a two-point perspective of time. Can you actually be a macro system and say that this is purely non-local? So I really like Robert's question: where is the non-locality? I'm trying to build that into the very mathematics of the thing, trying to categorify this awesome formalism. But yeah, time, I think, is very strange, and it's something we should all probably try to tackle instead of assuming that it's always there, just some background phenomenon. If time is emergent, and space is emergent too, then it sounds like active inference is the right dynamical formalism that could actually help with that, because maybe the agent is also making time at the, quote, same time that it's making a world. Thanks. Yeah. We've seen active inference models where, at each time step of the model, the agent does a rollout through the future; that's called sophisticated active inference. And it's a great point about multiple reference points. We have two eyes at different spatial locations, and that's how we infer depth, which isn't just a feature of the world, because it's something we have to infer, and it's therefore susceptible to optical illusions. And then the question with time: do we have a monocle? If we have only one reference point on time, are we then totally unable to calculate depth?
Or, if we can calculate or experience temporal depth, does that entail that we have multiple reference points in time? So, Adam, and then anyone else? So, to try to bring a few threads together: what I'm really liking about what Bleu and Shanna were just saying is this sort of relational realism. Instead of asking what time is, what space is, period: what is it relative to what? And when I say coherence: coherence relative to what? So let me backtrack some. There's the Kantian issue of: is it that we're just interacting, and/or is it that we're modeling, where the modeling is just a kind of pretense that's useful for describing this sea of forces that goes beyond description? I think we can have both. What I tend to do is view interesting systems in terms of active inference with nested Markov blankets, a nested hierarchy of dynamics. So you could think of, for instance, a set of coupled processes, some sort of chaining together of dissipative systems that manages to stitch itself together, to hold the cell together through the way it's chained. And this could couple with the world in a way that doesn't necessarily require a model; there might be an implicit representation to it, if you want to use the word in that way, a correspondence between the dynamics and what is evolving. But it's not necessarily inference in the sense of a model; whether you want to see this modeling process as real, or as a way of viewing things, is a matter of perspective. But then, as you add layers of complexity, you can get things like inter-loop processes that could refer to this set of action relations and bring things together. And then you could even have, on top of those or within those, another process which points to those.
So graphs pointing at graphs pointing at graphs, and Shanna's category-theoretic perspective would be very relevant here. At some levels you might talk of things like representations in a Cartesian sense, as in good old-fashioned cognitive science; we may have those. And we have an enactivist kind of intelligent coupling where it's just implicit representation, and maybe we shouldn't even use that word, or maybe we should; I'm not going to say, but I think we can have it all. And so, for the way you think about space and time from these nested systems: I think there could in a way be a kind of non-locality, in that they would be evolving independently, each on its own time scale. You have a nested, deep hierarchy of dynamics, and there's a sense in which they're all brought together by virtue of sharing this Markov blanket, so that they ultimately agree. But there are potentially ways in which they can diverge too. So it seems like you could have periods of non-locality and locality, and this going in and out, and I'm probably butchering and abusing the terminology, I apologize, but this going in and out might even be necessary: necessary for a kind of dual-phase evolution, for the systems to couple, for them to trade off their free energy, for them to do what they do in this sort of iterative process of inferring their own existence and doing what they need to do. So within this view, it seems like time can show up in multiple ways. There's the time of the world as this evolving graph, as a generative process, in the sense that, if we narrow it to particularism, some things were before and some things were after in this sea of causal influences.
But then you might get another kind of time: the psychological time of spatializing time, and then navigating through spatialized time, and then saying things about space, and then it comes in again. One more thing I'd say on the issue of space and time; there are two things to say about it. One idea I'm playing with is that our consciousness, the spatial kind of it, at least in visual-spatial terms, is actually always 2D, and that to the extent we have something like a 3D awareness of space, it's coherent state transitions between these 2D projections. And there might be a relevance to holography there. That's one thing to put aside. And then you maybe have a 3D type of space in terms of body space: a visual-spatial Cartesian space, but then space relative to the body, maybe a kind of quasi-polar coordinate space. And between the two of them, you can get a kind of 3D space coming out of this, or 4D, because it's dynamics. And one more thing about subsuming space and time to action, relative to what: I think this will be crucial for boundaries, in terms of what counts as a living system or not, and what would count as a conscious system or not, and of what kinds. What are the physical and computational substrates of consciousness? What kind of closure is achieved on what scales? On that issue, what I would like to have happen is either some sort of definitive handling of this, some sort of category-theoretic definitive handling that would basically nail it, hard problem solved, Schrödinger's question answered unambiguously, or a proof that this is impossible, and then I can rest. [laughter] Thanks, Adam.
Maybe Robert was alluding to this with the skepticism, but for me, whether something is answered or not is a psychological story that we're telling, and you could be convinced of something whether it's proven to be impossible or proven to be ruled in; either way, that would be a story. And also, nice point about the projective geometry. We talked about the projective geometry paper some time ago, about how we can look at something like the corner of a room and, even though the angles are going in weird directions, we infer it to be rectilinear. So that's pretty cool. Any thoughts here? Quickly, on the projective geometry, actually: I really love David Rudrauf's work on that, because it's a discussion of consciousness that puts a person, puts an observer, subjectivity and interest, right at the center of it. One thing I'm wondering about is what the realization is of this projective-geometric perspective engine that he thinks we have for spatial awareness. There could be a sense in which this is just something the brain can do, and there are proposals for something like this: Hinton has capsule networks, Numenta has their Thousand Brains theory, where there's a kind of reference-frame-dependent modeling happening in a distributed way, and that's part of the architecture, and somehow through it, it works as this graphics engine. I actually think it might be a more enactive type of thing: the reason you get this projective-geometric perspective capacity is the coupling between body space and visual space, linked by different kinds of affordance relations, and this is actually part of what's letting you do these state transitions and do this kind of projective-geometric modeling, rather than it being baked in, in a distributed way, as some sort of graphics engine.
But this is another way in which, for instance, embodiment and interest and value and other things get subsumed into that. Thanks, and anyone can raise their hand. Just one thought on that: action requires time. Action is almost defined as a sequence, and then we have the policy variables, the π, our kind of ActInf Lab pie, because a policy sits at the level above a particular action state. But actions and policies do take time. So there's something time-like built into our models, because what we're talking about, inferring the state transitions just like you described, all of that is temporal. And then the non-locality: drawing on, for instance, older work with dynamic causal modeling and statistical parametric mapping, you have an image of the brain that's changing through time, and you can make one type of connected graph from pieces of the brain that are anatomically connected. When people think about space, usually that's what they're thinking about: things that are touching. But you could also have effective or functional connectivity. You could have regions that are local in the causal network, but where that local link is due to some upstream factor influencing them both. So causal proximity, or functional proximity, may or may not be the same as things touching. Not sure where that puts us, but cool stuff. Daniel, can you say a little bit more with regard to what you just laid out, about action requiring time and the minimization of surprise? When we say minimization of surprise, is there something in your active inference formalism, is there an agent, that is creating scenarios? Dinosaur: there's no dinosaur in Silver Lake; there could be a dinosaur; I do not want dinosaurs; so I'm going to act in such a way as to preclude the very possibility of dinosaur. So when we say minimization, where is that coming from?
Is that Kant again? Is there a profound creativity at the basis of this system that is launching these probabilities? Because Bayesian probability seems kind of dull, right? It never says that there's this highly creative being imagining scenarios that it wants to preclude. So can y'all tell me where that's coming from? Or has the agent seen a dinosaur in LA walking down the street, and it goes, I don't want to see that again, therefore preclude, therefore action? The action of minimizing surprise requires time, but it's a different, imaginary time: it's, I don't want that to happen again, therefore I'm going to act in such a way as to preclude the very existence of dinosaur. Does any of that make sense? Well, as for dinosaurs, we know that they're in the Land Before Time, so that is, you know, clean and done. But minimizing surprise is something else scary, you know. Yes, as for minimizing the expected free energy, or minimizing the surprise: one of the tricks of active inference is to condition that upon a policy. If you look at the topology of the graphical model, policies get plugged into the state transitions between hidden states, and then what's being minimized is the surprise of the sensory outcomes that are emitted by the hidden states.
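The mechanism just described, policies plugging into the state transitions, with surprise scored over the sensory outcomes the hidden states emit, can be sketched for a discrete model. This is a minimal illustrative sketch, not anyone's published model: the two states, two outcomes, the matrices, and the "turn_left"/"turn_right" policy names are all assumptions, and only the risk term of expected free energy is computed.

```python
import numpy as np

# Toy discrete generative model (all numbers are assumptions for illustration):
# two hidden states (left room, right room), two outcomes (cold, warm).
A = np.array([[0.9, 0.1],   # P(outcome | hidden state): rows outcomes, cols states
              [0.1, 0.9]])

# Policy-conditioned transition matrices B[policy]: columns = current state
B = {
    "turn_left":  np.array([[1.0, 1.0],
                            [0.0, 0.0]]),  # both states map to "left room"
    "turn_right": np.array([[0.0, 0.0],
                            [1.0, 1.0]]),  # both states map to "right room"
}

# Current belief over hidden states, and preferences over outcomes
qs = np.array([0.5, 0.5])
log_C = np.log(np.array([0.1, 0.9]))  # the agent prefers "warm"

def expected_surprise(policy):
    """Risk term of expected free energy: divergence of predicted
    outcomes from preferred outcomes, conditioned on the policy."""
    qs_next = B[policy] @ qs   # predicted next-state belief under this policy
    qo = A @ qs_next           # predicted outcome distribution
    return float(qo @ (np.log(qo + 1e-16) - log_C))

scores = {pi: expected_surprise(pi) for pi in B}
best = min(scores, key=scores.get)
print(best)  # the policy predicted to yield the least surprising outcomes
```

Here the agent takes a conditional expectation under each discrete policy, exactly as in the left/right example, and selects the one whose predicted sensory outcomes diverge least from its preferences.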
So at the first pass, without getting into this meta-Bayesian creative energy: the agent has a set of policies that it can carry out, those are the affordances, and then it conditions on each of these policy options. If they're discrete, like I could turn left or turn right, then I'm going to do a conditional expectation on turning left as my policy, or a conditional expectation on turning right as my policy. Which one of those, in my prediction of how that policy changes the state transition matrix, is going to result in less surprising sensory outcomes? That is for a sort of fixed set of policy options that can be conditioned on. But it's a great question how we go from the known into the unknown, how the agent learns a new affordance, or maybe even generates a new affordance internally. There's probably a lot to be said on that, but one way that active inference speaks to it is that when we focus on action hand in hand with inference, we can think: how could the agent ever navigate a new kind of physical obstacle? Like, there's some kind of wall that you've never climbed before, but what has to be inferred isn't what's on the other side of the wall, or the details of the wall to the nth degree. It's enough to have a model of bodily action and a preference for being higher up, and potentially the affordances that the agent does control, that it does know about, like the movement of the joints, will help it deal with the unknown. Because it's not just approaching uncertainty as something that has to be eradicated by determining exactly what's out there in the world; uncertainty is something that's actively negotiated with through usage of the affordances that the agent already has. And when you say that, are the affordances constructed, are they conditions of the system, are they properties of the system? Like, what is the space of affordances, and is there a topology on the affordances? Because I'm still, like, if we categorify
any of this, there's a few people who can choose spaces like this, or in the reflective subcategories, how I always talk about it: these objects have reflections. And I'm still not thoroughly convinced that there is no duality between a system and its environment, right? So if they're working together, perhaps these are dual in a sort of mathematical sense. So if that's the case, then I just want to know: are affordances a property of the system, constructed, or conditions? How do they change, radically? Is it dynamical? Nice questions. The instrumentalist take: active inference is a statistical model that we made, and it doesn't make metaphysical claims about what systems really can do; it's a modeling approach. In that case, affordances are whatever the investigator puts into the model as affordances. Then the question about what the system's affordances truly are, that's a great question. It comes back to how systems apparently negotiate completely new affordances. Where does flying a plane come into play? Well, is that part of our extended niche, part of our constructed niche? And so we can think about that in terms of just the same eyesight that we had before, and the same finger movements that we had access to before, but now it's being deployed in a different niche, so the extended cognitive outcome is just radically different. So in the case of tooling, and language as a tool, and usage of instruments, potentially one could argue that it's still the bodily affordances that are being used, but higher-order contingencies are being learned by the system to be able to play piano or fly a plane. So the affordance of turning the plane left can still be seen as moving your hand to the left on a joystick, conditioned on an agent who understands that their preference is to move that way, and now has this sort of mapping between what they do with their hand on the joystick and where the plane goes. But what kinds of new bodily affordances
you know, can we have? That's right, and they shake, and they do different forms, that's what's cool. Blue, and then anyone else with a raised hand. So I was thinking about affordances in relation to time, and also in relation to the last MathStream, when you guys were talking about when memory starts to form, right? And so, like, as you're looking at your reflections in the diamond, you're looking at your reflections, and then there comes a space, maybe, where you can reach your reflection. I don't remember if that was exactly how it was worded, but that's, like, when maybe memory starts to crystallize, something like that. Maybe I butchered it; do you want to step in and correct me? No? Okay, sorry. So then, you know, I was thinking about when you have young kids and they perceive time, right? I talked about expansion and compression of time a little bit earlier. So when a kid perceives time, a small child, everything is like yesterday or tomorrow, right? There's no concept of next week; like, then we know what a week is, right? So you have these new affordances that come with learning, right? Otherwise it's just before or next, right? It's interesting. And so when do these affordances, does that correspond to maybe the ability to reach your own reflection in the diamond? And I flipped it here to "what are thoughts," so maybe if anyone has a thought on that, what are thoughts? Yep, Adam, and then anyone else. But actually, before I go, if Shayna wants to speak to what Blue just said, I'd actually really like to hear that. Yeah, go for it, but then I have thoughts. Yep, yeah. Okay, feel free. Well, I think it's sort of integrated. You know, the diamonds are very difficult objects, right, that I was thinking could be the perfect model, I shouldn't say perfect, a more perfect model, of what's going on with memory, right? So you have these cases that I always talk about, you know, beautiful people like Clive Wearing, who have a
three-second memory from a retrograde amnesia. And so you have to ask, someone like Clive, is he experiencing, you know, a complete temporal non-locality every two seconds? The child, right, the child mind, which I have a lot of, right: like, for a child, what are the conditions for object persistence, right, in their own very state, you know, like with the diamond, things like that? I think you can model what it means to have Shayna when I'm six, you know, and Shayna when I'm 85 or something. Why is it the same Shayna? And I don't think it is, like, at all. So when you say something like memory, what are you actually looking at? So again, if I think there is this duality between the system and its environment, how can you actually visualize that in this sort of diamond mentality? And so the thing about the diamond, right, is that perhaps memory is some kind of impurity, and you'll never actually get to it. So if memory is like a mineralogical impurity, then you never actually get the impurity, you just get the reflections of the whole thing. And so to take what Blue said and just tighten it a little bit, that might be more of what's actually going on: that perhaps the mineralogical impurity is that maybe you only have a now, and that if you're not able to access all the other reflections that are on the diamond, you may not have a future, and you may not have, you know, an extended time. But there's also something to say: has that diminished your vibrant experience? Somebody like me, who can really drop in, can drop in very Deleuzian: I am a degree of heat, I am a degree of white, I am a unit of sand. This formalism is super powerful because it's giving becoming, an action, to these individualities, these haecceities, right, that somebody else would say have no kind of unit of consciousness or something. So I think what I'm trying to do is combine this notion of becoming that is super individualistic, like, you know,
I am a spoon of coconut, like a spoonful of coconut, that's it, with this notion of this diamond being. And they're not just reflections, right, they're like profinite reflections, so they are totally disconnected. So that's sort of the mess that I think is underlying the beginnings of memory. Maybe it's just this kind of pure mess, and as you age, or whatever, aging is sort of involved in these pro-étale coverings. And so in the covering space, you know, if you're working through proprioception and things like that, there's a lot of work in neurology saying that your brain probably processes more teleologically if you're a child, and you only have, like, a now and a bunch of points; you don't really have a lot of covering spaces, right? Yeah, so I think working with diamonds, which we'll get to in a little bit, there's something definitely at play with: you're staring at your memories, right? You're walking around in your past, like, all the time. So I don't think you can ever actually get to the true memory, unless you've reached, like, the infant state again. Thanks. Adam and Robert. So, to revisit the affordance issue: if I'm understanding things, you could say, in any given moment, there's a view from the outside, like, what was the scope of the possibility of the actual interactions for the system, given its configuration? But because of niche construction, that's also a moving target. But then there could be another set, like a perception of these affordances by the system, where that perception itself changes the affordance structure. And this can be of different varieties: like the system in a kind of dynamic self-configuration, jostling around in different ways, doing implicit inference, just sussing out what the affordance structure is, and then this kind of implicit perception. And then there could be, on another level, like,
explicit affordance perception, and each of these influencing the other, and maybe operating differently and at cross purposes. I was just kind of paraphrasing, whatever, and adding my own twist. This seems connected; a lot of things will connect to this. On memory, I've been thinking, again, back to consciousness and different forms of it: what do we mean when we're accessing something again? What does it take to remember things in different ways, and how do our different kinds of awareness of what happened relate to our ability to remember these things? And so you could have a kind of memory that's just sort of baked in, what would they say, a hysteresis: you push on the thing, and then through its dynamics, or through its material properties, it just kind of holds the shape, and you've got a memory there of the indentation, like a desire count. But in terms of this holding on to memory, like a Clive Wearing-type memory, time does seem key for contextualization. And like Blue is saying about the way time changes with age, it seems like that kind of time, and the way we construct it, one of the things you would say is we spatialize it. And we spatialize it relative to what? Probably relative to, you know, us as agents pursuing goals in the world, with the inference of different affordance structures. But okay, what are the different ways, though, that you can relate to this spatialized time, and to this maybe objectified you, and the different ways of doing this? What are the different kinds of self-reference involved? That's an issue I may get to later. But one of the things I'm wondering, in terms of memory, and this might relate to, like, impurities in the crystals, is the ability of things to get entangled, to point to themselves in different ways. So, like, within this three-second window, there's maybe a sequence
of frames of sense-making, but there's a broader frame of sense-making, of a causal unfolding, that has a certain stability over which it can hold itself together. But part of what would let this specious present, this inherent depth to the now, might be this kind of self-reference, where, you know, earlier frames of experience have predictive information about what follows, but then what follows has information about, and points to, what was in the past: predicting the future and postdicting the past as part of this process of unfolding sense-making. And it seems like the degree to which you could do this, and the ways in which you do this, would be the degree to which things might be likely to stick around, that might then be likely to be made salient, to find hooks. And in this process where everything's competing with everything to keep going, the ability of something to get entangled with itself in different ways would provide different degrees of sticking power. And this would involve things like, you know, multimodality, like a heterogeneous code: maybe my direct, you know, first-person-shooter point of view is pointing to a linguistic code, or some sort of iconic symbolization, or some sort of other interoceptive, intuitive code, but these are all cross-referencing each other. That'd be one way of hooking things together. But you could also hook things together in terms of an explicit causal sequence, where it's just this to this to this: the boot kicked the ball, the ball went down the ramp, the mouse got the cheese, like the whole Rube Goldberg thing. That could be part of it too, but actually more so if it's coherent, and maybe agentic, if it's related to, basically, I think we're going to have to get personhood into this and the nature of it, relative to, like, the causal streams that
you have the most familiarity with, and that fit into most of this meaning-making heterarchy that has, like, a visceral ground, a symbolic grounding. The more of these can come together, probably the better it'll persist, and the more that impurity will give the sparkle. So thanks. Adam, Robert, and then we'll probably move to the next slide. Yeah, this slide, with the thought: I wanted to say something about that part. Yep, if you still have something else, then we do. Okay. So first, of course, I would be curious how active inference people would answer what thought is, or what it means, just philosophically. I think that's a very interesting question, and there are certain kind of traditional ways we think about thinking, and it seems to me that active inference is challenging some of those very traditional assumptions, but I'm not quite sure how exactly. So the first assumption, and that's the obvious way in which active inference challenges things: the first assumption is that thinking is something which is like a counterpart to what's happening physically. So there's the physical world, and there's thinking, which is kind of opposed to it, and I think in active inference this dichotomy is kind of erased; that's kind of obvious. And the second question: there's a long tradition in thinking about thinking which says that thinking is something which is rule-based or law-like, laws of thought, or, in a more modern framing, one would say, oh, it's about information processing, or probabilistic reasoning, whatever. But that seems to me to clash, at least, with the phenomenological way, or with the phenomenology of thinking, in some sense. Thinking seems to be something direct and unmediated, and we talked about the now, which is a particular example of that, and I wasn't quite sure
how that hangs together with the whole active inference theory, basically with the mathematics of it. I mean, as I would naively assume, you talk about inference processes which are happening, defined over some bounded system, so that seems to lend itself more to this information-processing view of thinking, but that seems to be somehow opposed to the phenomenology of thinking, and I was wondering how you bring these things together. Great questions. One thought might be that the way that we've seen things like anxiety or affective states be addressed in active inference is with a parameter reflecting that. And so it's almost like there's the core active inference model, the one that, you know, data comes into, and then there's this interpretive layer. And so I might say, at this point, that the active inference framework doesn't make strong claims as to, you know, what is a packet called thought, or what is the nature of thought, because, as a scale-free framework, first off there's the whole instrumentalism-versus-realism issue. So it could be a totally defensible stance that active inference says nothing about thought, or maybe somebody wants to unpack it in a way where they think it does. But at the first pass, the things that we've seen models of active inference address that people experience as thought, like anxiety or excitement or something like that, those are parameters that are named in a mathematical model, and they don't seem to make strong claims on the nature of thought. Adam, what do you think about that? Um, so in terms of, I guess, the good folk notions and how they relate to thought: this notion of it being, in a way, separable from action, and distinguished from this in-the-world just doing and just acting, that seems like a good place to work with. And so if we're talking about that, it seems like it would relate to different species of sophisticated inference, where you would be doing these kinds of counterfactual rollouts of what it would be to
pursue different policies or patterns of action selection. And for me, to the extent that it's conscious, it would have to be grounded out as some sort of embodied simulation. I want to build everything out of action-perception cycles and everything out of sensorimotor things; ultimately, that's going to be the grounding, that'll be the experience. That's not to say that thought would only be determined by what you simulated in your rollout. For instance, you could have a lot of spreading semantic association, just, like, a kind of priming effect, and then words come to mind; that's part of it, and that's not necessarily something you're controlling in terms of pursuing a policy, that's just happening. Or some sort of association, you know, with interoception, with just, like, a kind of resonance in your body that might play into and parameterize this process. But the thinking, it seems like it would be a mode of sophisticated inference: generating these counterfactual possibilities and using them for things like different kinds of causal reasoning, different ways of situating what you think is happening or could happen sometimes. And that's how I would relate to thought, I think. And I don't know, like, what do we want from thought? We want more, and so that might help. Great question. And do we want to experience the phenomenology of human thought, which is an encultured phenomenon, or are we trying to talk about trees and all kinds of other things? So, everyone, awesome, thanks for the raised hands. Blue, Shayna, Robert. I think Shayna was first, so she'll go first. Yeah, go for it. Oh, okay, awesome. Yeah, just a couple things. You know, just like Daniel was saying, when we say thought, right: I don't think thought is, like, recognition of the truth of a proposition. I'm of the mindset, you know, I think Kierkegaard said, like, all thought is a self-repeating false sorites, which, yeah,
it's like, so I'm not of the sense that thought is logic or anything like that, because there's first-order logic, second-order logic, and it's just, you know, you can get, like, what is profinite thought? So I really think, when we say "what are thoughts," we need to say what we mean by thought, right? So Robert and I have this model of, like, in-awareness. So it could be that the level of your awareness, you know, dictates the levels of time you have, and then also what these thoughts are. So, you know, someone can say, oh, a rock has no thought, but perhaps the units of thought of the rock are just lower than our units of perception, something like that. So I think it's very strong to say that things do not have thought, right? So to ask what a thought is, it's interesting also to say what is not a thought, you know. So I have these kind of interesting ideas: if you can model the brain as this, you know, pro-étale sheaf in a perfectoid space, or just what this diamond is, then if you can actually model neurons as these points in the diamonds, these mathematical impurities that are actually morphisms of schemes, you would say that, okay, thoughts are these morphisms. But we know thoughts are a little more messy than just satisfying strict associativity and unit conditions, so the math can only give you a model of the thing, because perhaps it's not strictly associative. So, I mean, we all know that you do have a sodium-potassium channel, kind of neurotransmitter cluster network happening. You always want it to be springtime in your head; you would not want, like, a winter in your head where all the synapses and the trees die and things, but nobody can tell you how the trees in your head sort of photosynthesize. So I know after I gave that first talk, someone asked, you know, well, what are thoughts, actually? So again, if these neurons are these geometric points, and I still think the brain is some sort of non-local hologram sort of
thing, its ability to rewire is profound. We also don't want to get injured, right, because there is a detrimental effect. But so, like, how do you sustain this sort of, you know, temporal non-locality inside this profinite condition? Maybe Robert and I are onto this with this sort of in-awareness, levels of awareness: if you're able to achieve sort of simultaneous states of awareness, then perhaps you could get out of the idea of thought as the truth of a proposition, and then maybe you could probably figure out what that is. Again, I'm not convinced that we're doing anything new. Like, what a new thought is, I'm not really sure what that is, or if you can actually, like you say, scale-free, right? So scale-free would allow us to have this in-awareness sort of phenomenon. And so I agree with Robert; I'm not sure if there is a contention there between the phenomenology of what thought is and then the sort of perception of what thought is. Again, really tricky to say, you know. Well, the brain is getting new connections, you know, and then it's like, okay, what, new connections in my same interface? Not really sure how that is. So I'm glad active inference allows for new, but I'm not sure if it's, like, radically new. I'm not sure if any of us would be able to recognize something radically new, you know. Okay, I'll stop there. Nice, thank you. More is different. Blue, and then Robert, then Adam. So I was gonna piggyback off of your point and get into that, and then I have a question. So really my background is neuroscience, and so when you look at the brain, the fundamental unit of a thought has to come from the action potential, right, like the sodium and potassium and whatever that is happening. But not one action potential makes a thought. So, like, you were talking about how do you measure emergence, and, you know, there's the criticality hypothesis, right? So, like, a certain number of action potentials
could lead to a thought, perhaps. But how do you quantify emergence without a counter object, right? And so you kept saying this, and so I'm gonna maybe ask you to clarify or push on this a little bit. Because a counter object, when I think of a counter, I think about a counter in terms of computer programming: for i, you know, from zero to 50, count, count, count, count. So is it like counting the number of action potentials, or the number of ants leaving pheromone traces, or the number of pheromones dropped, right? So when you think about a counter object, is that what you mean by a counter object, or is there some fundamental thing that I'm missing? I don't think I ever said the word counter object, so can you tell me where I said that? Yeah, so it was when you were quantifying emergence, and I took some notes. So you have: emergence is built into the system, versus having a counter. So maybe not counter object. Oh yeah, it wasn't, yeah, it was not an index, right. So I'm not of the "why does i have to start at one," you know. So I don't think real numbers exist; it's a bunch of equivalence classes. So, just, like, well, 50 of these things happened; I don't really know what that means, right? But I agree with you: how do you actually quantify emergence? Because, right, if you have too much hypersynchronous activity, you have a seizure, and a seizure is not a thought, right, everyone? So it seems to be that there is, and that's what Chris Fields and I and everyone are after, like, what is this stop mechanism? The stop mechanism, that you need a clustering happening in the neurons, right, to actually have thought; thought doesn't seem to come out of your elbow. But if the nervous system is non-local, like I believe it is, which is why I'm an advocate for the active inference, the, you know, embodied cognition: if the neural system is non-local, and nobody's proved that it's not, then how do you actually say when the amount of
clustering gives you something? Like, look at all this: our synapses are bright and perfect leaves, and everything is, like, beautiful in here, right? And effortlessly. How does thought come so effortlessly sometimes, right? But then if you have too much firing, you don't get it. And it's not like, you know, there's an actual rest in the action potential. Like, Blue, what's the order of neuron firing? Are you smaller than nanoseconds at that point between firings? It has to be, right? It's not Planck scale, it's not, like, 10 to the negative 42, but it's small, right? And so with that kind of time happening, I'm perplexed: you have so many layers of time happening, neuronal firing time, like, the stem cell firing time, you know. So when I said counter, I don't think I was referring to an actual, yeah, well, I think I was saying counter as in most people just take time to be some kind of Geiger counter, and there's no emergence, like, at all, you know. Hope I answered that and didn't make it worse. Yeah, so I couldn't figure out if you were, like, Frank's talk, talking about a counterfactual, like the opposite of emergence, like, what's the opposite of emergence, or if you were talking about a counter, like, you know, when you sum things up, so that's what it would be, a solution. Yeah, yeah, great. Robert and Adam, do you want to go to the next one, or do you want to continue on the thought? Up to you. Well, I just had a very quick clarification of my question, actually; Adam can reply immediately after. That's good, because Adam said something like, what do we want a theory of thought to be, or what do we want it for, which kind of thought are we talking about? When I was asking my question about the nature of thought, I was mainly thinking about conscious thought. I mean, there's this tradition which says something like, oh, the mind is whatever the brain does, and then you talk about, I don't know, potentials, about firing, about calcium, and I
don't know what. And if you then say, okay, well, that's a very abstract thing, and you call that thought, I think that's fine, that's fine. I only wanted to say that it doesn't really give you a good answer to what you mean by conscious thought. And the standard way psychologists normally answer that question is to say, okay, well, there is this unconscious processing happening, and then something magically happens, and that turns into conscious thought. And I wasn't quite clear whether predictive processing, or active inference, or whatever frame you want to take, really answers that question, or whether it says, oh, it's actually not a good question, and kind of dissolves this question. Well, that's a bit more specific about my question. Adam, go for it. To try to tie a few things together: so, yeah, what we want from thought, and what our sensibilities might be, that'll determine what boundaries we establish for where we find it and where we don't; that'll be relative to what we want. But to try to build up: I think there is a sense in which you could say, like, a rock is a mind, or is thought, even though it's of an extremely dumb variety; it's action, because we can point to it, it's a kind of density. And you can think of it as, you know, unlike, say, a crystal, which can grow, it can just do one thing, which is steadily degrade over time on some kind of entropic gradient, but to the extent it doesn't, that's all the intelligence it's got. But then we get to systems that can, through sophisticated dynamics, start to do things like suss out different things that can happen, and do a kind of more flexible policy selection. And potentially, in order to do this, they might have to be critically organized, and that's the only thing that could allow for generalized evolution, or generalized Bayesian model selection: that you actually have
to be poised at this, like, inter-regime in order to have the right properties, in order to have both the variation, where you can suss out the possibilities, but also the stability, so you're not pouring over into chaos or disorder, so that you can actually build structure upon structure. So it seems like that's going to be critical. In terms of, if we're talking about something like, you know, a life form, maybe, some people say, that's kind of like this sort of more sophisticated mentality, with, you know, optionality. Maybe that's the hallmark of life, maybe this sort of hierarchical nested modeling, if some of these inner-loop processes are capable of bifurcating in different ways and giving you a kind of intelligent optionality; maybe, you know, that's a kind of thought. But let's keep working up. So if we're talking about something like the brain, I would think of a hierarchy of nested coupling processes evolving on different time scales, and in the community they'll tend to call these things non-equilibrium steady-state densities; I call them self-organizing harmonic modes, and it's a way I interpret neural synchrony. To the degree that you get it, it's facilitating communication through coherence, making sure that within these systems, it's not just, like, tuning the excitation-inhibition balance and then it works, but that, actually, within this maelstrom, to get coherent dynamics, coherent inference, the neurons need to team up. So you need synchrony. You need synchrony for, I guess, what they would call marginal message passing. So it would be like marginalization, like a joint belief, and then you packet your beliefs, and these, like, coalitions are doing these units, this quantization of sense-making, of prediction error and prediction. And so there would be this nested hierarchy, and so if we're talking about
something like, let's say, gamma frequency, that might be a very small, local unit of synchrony and sense-making, and maybe that's like a quantization of prediction error that will be passed upwards, so that's like an observation. Within the scope of this gamma packet, though, maybe that's like a prediction for whatever is inside it. But you keep going up, so then maybe that's something like beta: so gamma is like 40 times a second, and then beta would be, to use the terms, maybe high and low, somewhere between 13 and 30 times a second, and maybe you're starting to get these higher-order attractors and these higher-order predictions, like a composition, like something like a hand reaching, something like that. But where's consciousness going to come into this, and why? It seems, if you can take these recognitions of different patterns and you can stitch them together, such that you have, within the organizing principle of, I would suggest, an egocentric perspective, a particular point of view, and you can arrange them in some kind of space (what kind of coherence, how much, that's a good question), some kind of time, some kind of causal relation, well, now, with this inference, you might start to say, well, now you might be inferring a world, specifically a structured world model with spatial, temporal, and causal properties, where things can unfold, and you can have something like a stream of experience, if this synchronization manifold is basically giving you a joint belief over your sensorium. And so it seems like alpha is a frequency where you can get a big enough negotiated agreement on a harmonic, on a collective sense-making, that you can bring together all the different embodied causes, tagged by an egocentric reference frame in terms of anatomy, and, like, what would be a scope. And something that might say that's
implicit is there's this inverse relationship there seems between the frequency at which you can get oscillations or harmonics and the scope of what can be synchronized so you can get like a little thing to agree really quickly but to get a big thing it's going to necessarily take more time and so more things can come into the mix and consciousness I'm wondering if that is basically at a scope where you're doing these joint estimates of system world states that lets you both inform and be informed by evolving action perception cycles that actually the the the functional closure that you achieve for this estimation process is on a proportion of the events in the world that are relevant to your affordances that actually would be part of your policy selection and that if you can get this kind of alignment of temporal dynamics and if you're having the these things brought together in a way that gives you a point of view of some kind when I might say okay now it's something like conscious thought this stream of experience doing different things where but the thinking part of it we might want to save that or not for something like imaginative planning or some sort of like kind of like like you're you're thinking of something other than what is and that's actually what's happening at this intermediate level where you're conscious but you're sussing out different possibilities as you're trying to situate something in some sort of coherent spatial temporal unfolding that you can make sense of and so the idea would be like everything that we can think of we're necessarily temporalizing and spatializing just so we can rocket like rocket I mean literally so we can bring into some sort of relation where this intermediate level embodied simulation otherwise we can't think of anything but I'd say there could be a kind of unconscious thought also this actually might relate to something like a counter I think you get one counter in that this bundling if there's a certain timescale 
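The inverse relation between oscillation frequency and synchronizable scope mentioned above can be sketched with back-of-the-envelope arithmetic. This is purely illustrative: the conduction speed is a made-up placeholder, and the band frequencies are rough textbook values, not claims from the discussion.

```python
# Toy illustration (not a neuroscience model): higher-frequency rhythms have
# shorter cycles, so less tissue can be recruited into agreement per cycle.

CONDUCTION_SPEED_M_PER_S = 5.0  # hypothetical effective signaling speed

bands = {"gamma": 40.0, "beta": 20.0, "alpha": 10.0}  # Hz, illustrative values

def cycle_ms(freq_hz):
    """Duration of one oscillatory cycle in milliseconds."""
    return 1000.0 / freq_hz

def max_scope_m(freq_hz, speed=CONDUCTION_SPEED_M_PER_S):
    """Rough upper bound on the distance a signal can traverse in one cycle."""
    return speed / freq_hz

for name, f in bands.items():
    print(f"{name}: {cycle_ms(f):.1f} ms per cycle, "
          f"~{100 * max_scope_m(f):.1f} cm reachable per cycle")
```

The point is only the scaling: halving the frequency doubles both the cycle duration and the spatial extent that could, in principle, be brought into one negotiated agreement.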
If there's a certain timescale at which you can get a big enough coalition, a time it takes to achieve that, then in some ways that is itself a kind of counter, a kind of discretization, and you might think of it in active inference terms as the discrete updating of the generative model. But it seems like you can get something like unconscious thought, potentially, via the hippocampal system, a kind of loop running around again. One way I think of thought is as navigation through generalized space: hitting a particular, not arbitrary, target and being able to find your way there. I've been working with roboticists who build these SLAM systems, which means simultaneous localization and mapping. What these robots are doing is trying to figure out: where am I in this environment, and what is this environment? And they do this actively, moving around, scouting out what's happening, so there's this mutually constraining inference of what the world is and where I am in it. The core system would seem to have within it these place fields, locations that could be in physical space (the tree is there, I'm here, my kitchen's down there) but could also be within a conceptual space. They're sometimes modeled as bump attractors, where you get these local recurrences within this centrally located structure, and the chaining of these bump attractors would then chain together broader dynamics for the rest of the brain. In theory this could operate in a way you're not aware of: it's coordinating sense-making, not propositionally, but in an orderly way; there's a topology to it, you went from here to here to here. So in theory that could bypass consciousness. I don't know whether it can or not.
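As a toy illustration of the "where am I?" half of what those SLAM systems do, here is a minimal histogram-filter localizer on a one-dimensional circular corridor. Everything in it is an assumption for illustration (the landmark map, the sensor probabilities, the deterministic motion); real SLAM also estimates the map jointly with the pose.

```python
# Minimal histogram-filter localization: an act-sense cycle that sharpens a
# belief over position. The world map is assumed known here; full SLAM would
# infer the map and the pose together.

world = ['door', 'wall', 'wall', 'door', 'wall']  # assumed landmark map

def normalize(p):
    s = sum(p)
    return [x / s for x in p]

def sense(belief, measurement, p_hit=0.8, p_miss=0.2):
    """Bayesian update: reweight each cell by the measurement likelihood."""
    return normalize([b * (p_hit if world[i] == measurement else p_miss)
                      for i, b in enumerate(belief)])

def move(belief, step):
    """Shift the belief with a (here, deterministic) motion model."""
    n = len(belief)
    return [belief[(i - step) % n] for i in range(n)]

belief = [1.0 / len(world)] * len(world)   # uniform prior: total uncertainty
for z, u in [('door', 1), ('wall', 1)]:    # observe, then act
    belief = move(sense(belief, z), u)
print([round(b, 3) for b in belief])
```

Each pass through the loop is one action-perception cycle: sensing concentrates probability on map cells consistent with the observation, and moving shifts that belief along with the agent.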
But in terms of this nest of graphs of graphs of graphs, there could also potentially be a kind of, well, not holography exactly; let me say non-locality, in the sense that when you coarse-grain, at some level of coarse-graining, locality falls away. I'm not sure about that. So those are some thoughts on thought, but one more thing. Ultimately I think understanding the principles underlying this system, not in a narrow sense but this graphs-of-graphs-of-graphs structure where inner-loop processes have these different capabilities, will be the secret to advanced intelligence: understanding what we want from thought and cognition. I also suspect it will be the answer to Schrödinger's question of what makes life clever, what makes it more than a whirlpool. Dissipative systems can be extremely impressive, extremely lifelike, but this kind of inner-loop process, potentially a critically organized one that can bind together the overall dynamics and, because of its bifurcating potential, give the overall system this sort of optionality, that's going to be important. Now, what if this inner-loop system is also a kind of counter, a thing that stitches itself together over time, the way genetic inheritance does? It gives you, over time, the ability to get better priors into the system; it lets you transfer learning, lets what happens here influence the next thing, and the next. You could think of this in terms of what Jablonka (I think I'm mangling the name) calls unlimited associative learning as a hallmark of consciousness, a kind of evolutionary transition: unlimited, open-ended learning as phylogeny, basically an inner-loop process capable of coordinating the overall system through some sort of pointer relations, semantic pointers, a semantics which itself allows for a kind of heredity, an inheritance of whatever happened. So if we bring those two things together, an inner-loop process capable of some kind of counterfactual processing, linked to inheritance, then we'll have what we need for either life or a potentially conscious system.

Thank you, Adam. I'll flip to the next slide, on diamonds and holographs. Where does this come into play? It's not something you'd find in the active inference literature today, but maybe it will be found there tomorrow, which is why we're calling on our diamond experts. Are we bypassing the diachronic and synchronic? Either way, go wherever you want on this slide.

Okay, so again, if, as Robert is saying, the mind is what the brain does, what is this, what do we want from thought? In a generative adversarial network you have this generator talking to a discriminator saying no, no, this is a cat, this is not a cat, so the AI is in a sense talking to itself. What is that in us, when we're talking to ourselves? You have someone like me on the category theory side saying, okay, I do think there is some kind of strong duality between the agent and the environment. If minds are modeled in this diamond way, then what you're actually doing is engaging with your reflections. How do you engage with your reflections? Do your reflections have their own reflections? So, building on what Adam was saying: what's the role of priors in holography? That was my original question. When an agent figures out how to jump over a wall it has never seen before, how is it doing that, if affordances are just a set of parameters built into the thing? And if you could do this parameter-free, what would that model look like? For someone like me, it's fascinating that when I speak to you, the speech doesn't last, yet somehow you remember what I say. You must construct something like a holographic copy of me and what I'm saying, so that you can store it; someone like me, with almost total recall, can remember stuff like this. I can talk to people and have seven different versions of time going on; how do you elide all seven and focus on what's happening now? You have crazy feedback processing going on, and none of it is happening in real time, because transmission takes time; this is already the past, all of it, whatever we mean when we say the past. So I'm wondering: when you have a very strict set of priors governing how your agent behaves, where do those come into play, and how do they work, if the very way the agent operates is through holography? By holography I mean some kind of dimensional relation: perhaps a 2D retina for a 3D creature, 2D retinas taking in information, compressing it, and then reconstructing it in a higher dimension. Instead of just the usual holographic principle, which relates anti-de Sitter space to a conformal field theory, I'm trying to get at it in a diamond way, where you use the étale cohomology of diamonds, these functors, and build up a holography from an image functor of sheaves. It's purely mathematical: build up a holographic space from image functors, which is really relevant to active inference, because acting has no other definition than a functor, some kind of relation. So you're building up holography from relations, and the conformal aspect, the metric-free, gravity-free space, would sit in the profinite condition of the diamonds: a six-functor formalism replacing anti-de Sitter space, and a profinite condition replacing the gravity-free side. That seems to me a very interesting way to figure out how an agent is navigating, if what it navigates are reflections. So, the math part aside: what do the active inference experts here think about engaging with your reflections, the idea that it's never really anything new, that you're just walking around in your echo space?

Thanks for the comment. Adam, and then anyone else with a raised hand.

So this might not dock well, but let's try. In terms of how I would remember Shanna, or how I can know anyone or anything, I'm wondering if it always has to go through some kind of generalized mirroring with your own embodied experience, and ultimately has to be cashed out that way. The particular holographic character, the ability to reconstitute, to reduce the dimensions and then expand them again, would come from having a system that can generate intelligent coupling with the world of a sensory-motor variety, but can also run that same process in an offline mode, so you can imagine, run embodied simulation. In active inference, sensory-motor processes are understood as self-realizing predictions: you set these equilibrium points and then the hierarchy reconfigures and unfolds, and as it jostles around, passing these implicit predictions and prediction errors, it will, by Hamilton's principle of least action, like water flowing downhill, find a state of minimal prediction error that is ultimately cashed out in reflex arcs in the world. Thought would be the virtualization of that: you imagine the pattern of reaching out, so you have this construction system, this virtual reality system. If there are reliable patterns of coherent state transitions you're likely to make in different circumstances, you're starting to get an agent model of yourself, of a kind: a reliable set of policies that will unfold in different circumstances. But that's not yet self-reflective. It seems you're going to have to objectify the self somehow, and I think that likely comes through mirroring, through relationships between parents and children at first. Some people say mirroring is innate; I actually don't think it is. It seems like parents solicit it: they mirror with you so that you learn to mirror, and that creates the learning curriculum. Through these games played between caretaker and child, you learn a mapping between what's out there and what's here. Somewhere along the way you learn to associate what other people are doing with your own embodiment: they're over there, you see that, and you draw the analogy to yourself, putting yourself in the same pose, the same situation. That gives you a basis for a kind of objectified selfhood. It's still always from an egocentric perspective, but now you're able to model the other person, and I'm thinking that this is the interface: my Shanna model's docking point would be something like this third-person, objectified selfhood.
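The "self-realizing prediction" picture above, action descending prediction error toward a setpoint like water flowing downhill, can be sketched in a few lines. The setpoint, gain, and dynamics below are illustrative placeholders, not anything taken from the active inference literature.

```python
# Toy sketch of a self-realizing prediction: the agent holds a fixed prior
# (setpoint) and acts to reduce the squared prediction error, pulling the
# sensed state toward the prediction rather than revising the belief.

setpoint = 1.0   # the prediction handed down the hierarchy (e.g. hand at target)
state = 0.0      # current sensed state
gain = 0.3       # hypothetical action gain / step size

for _ in range(20):
    error = state - setpoint   # prediction error
    state -= gain * error      # action descends the error, like water downhill

print(round(state, 4), round((state - setpoint) ** 2, 8))
```

Perception would run the complementary update, adjusting the belief to fit the senses; here only the action half is shown, which is what makes the prediction "self-realizing".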
Third-person objectified selfhood, then, would let you cash it out, unpack it, in coherent embodied simulation, something like that. One thought about the GAN setup and discriminators: it seems like that might again be crucial for both the hard problem and Schrödinger's question. There's this one thinker, an academic psychiatrist (did you plan on having him on, Daniel? He wrote this one paper; he's fantastic, actually. I think that'd be a good idea), who has this idea of allostatic overload and a collapse model. These systems have functional bottlenecks, heavily trafficked and contingent areas that help establish the synchronization manifold, the coordination, for the overall system, and those areas tend to fail first when you stress the system too much. That gives you, for instance, a kind of flexible hierarchy: either you're doing things unconsciously, just fast feedback loops with the environment, not getting fancy, quick-or-dead, or you're letting yourself build models upon models, reflection upon reflection, but those more rarefied things fail first. So you can move up and down in sophistication, and the claim is that this applies to all interesting intelligent systems. But while you have this natural adjustment of your sophistication, I'm wondering whether it would be good to have something that preemptively anticipates it, so you're not just having reactive regime collapses: maybe, yes, I could switch into an unconscious berserker mode right now, but maybe I don't want to be in that position, so something that gives me a read on my degree of stress, of allostatic load, of how close I am to the edge, could be a good thing. For the brain this would appear to be largely the anterior cingulate: probably the main free energy or prediction error integrator, centrally located, with in every sense just about the most connectivity, perfectly suited to report how well you're doing at self-evidencing. I'm wondering whether within cells you have a similar kind of GAN-like discriminator, some inner metabolic cycle that accumulates, maybe mitochondria accumulating radicals, or some gene expression loop, I don't know, but something highly central that, when it starts to go, influences your policy selection, influences your orchestration. I don't know what that would look like in a cell, but for the brain there do appear to be some fairly centralized structures that allow for that kind of GAN-like function, and that seems key to intelligent agency. It's also potentially key, though I don't know if I believe this, to what Sam Gershman has argued about reality monitoring, a higher-order-thought sense: how do I distinguish whether I just imagined something or actually experienced it? There would be some GAN-like discriminator; he treats the whole frontal lobes like this, which is something else. But the GAN framework, this idea of discriminators, seems extremely flexible, extremely powerful, applicable across a broad range of systems, and I think it's actually really essential.

Thanks, Adam. To bring it back to active inference and the diamond holograph: I drew two diamond shapes on my page. One was person A and person B and their shared informational niche; they're both looking at a whiteboard.
So there are two people communicating through the shared screen and unpacking it, which is something we talked about with Chris Fields. Then, to relate it to the diachronic (through time) and the synchronic (at the same time), I thought of something like a bow tie, or two diamonds: in the current moment there's the agent now, there's the agent in the future at an inferred trajectory point, and there's the agent in the past. We're retrodicting, we're nowcasting, and we're predicting the future, and maybe the pathway through past, present, and future is like a narrative that helps the organism make sense of where it is at that time. Any other points on diamonds? Otherwise I think we can go to the future and give our closing thoughts, and anyone watching live can add a thought as we go around. What is the future of active inference, or a future, a direction? Where do we see this field ten years from now, or at some other time point? Will we be adapting active inference to high-level unsupervised learning? Will we merge with strong artificial intelligence? Where do we see embodied cognition scaling to, or applying next? Bleu, then Robert, then anyone else. Robert, why don't you go ahead.

Sure, thanks. Just my personal opinion: it would be very interesting to see to what extent active inference could move beyond what are now largely mechanistic discussions, complicated mechanisms happening in the brain or in a cell or wherever. I would be very interested in whether active inference could be a way of speaking precisely about norms, about normativity; I think someone said something about normativity at the very beginning. What I mean is the distinction between inferring, or focusing on, what is the case, what physical stuff is going on, and what should go on. In philosophy there is a famous claim that one cannot be derived from the other; it goes back to David Hume, the is-ought problem. I was wondering whether active inference could be an interesting middle way here. A second, related thought has to do with AI and reinforcement learning, which we mentioned at the very beginning. One could be very critical and say: active inference is very nice and interesting, and one could do nice biology with it, but it's not so obvious what the benefit is compared to reinforcement learning paradigms. If one could say something reasonable about this normativity question, that would be a very big plus for active inference. So I hope to see future work that goes more in that direction as well. Thank you.

Robert, just to respond to that: I don't know if you've seen the scaling active inference paper that we looked at; it's been a while. Who was the first author? Alexander Tschantz, of course. He compared active inference, as a computational paradigm, to reinforcement learning, testing different paradigms, the Hopper and the car-on-the-hill tasks, and it was a nice paper. Anyway, where I would really like to see active inference in ten years: we talked about how active inference is a scale-free model, so theoretically it should also be multi-scale, and I think we have a little work to do to get it there. That work definitely centers on emergence: when does the next scale emerge, when does a collection of cells become a tissue, when does a collection of tissues become an organ, et cetera. So I would really like to see active inference incorporate the bi-directional information flow discussed in Krakauer and colleagues' paper, "The Information Theory of Individuality". I would like to see that merge with active inference and give us a way to talk about how the cells in my toe constitute me, in the society that I live in.

Nice. Shanna, then Adam. Yeah, Bleu, that's great; I totally agree and support you there. I'm obviously a heavy advocate of mirror phenomena: while there may not be mirror neurons, which I hope there are, there is a mirror function, so I'm an advocate not just for reinforcement but for imitative learning. What I would like to see active inference move towards is adding some new notions to the mathematical framework. Adam and I have talked about condensed sets, which are Peter Scholze and Dustin Clausen's idea of sheaves over a point. If we could actually get time modeled as singular, which sounds catastrophic but might be more apropos to what's happening here, instead of assuming it's continuous, a fluid you move through, and if you could read brain dynamics in condensed sets, this would help us incorporate the mirror phenomena. Adam and I really think the GAN is going to be nice here, so I have this idea of constructing a pro-generative-adversarial network and having a perceptron that's an actual v-stack. A v-stack is a higher notion of the diamond: whereas you glue together certain spaces to get a diamond, you can also gather diamonds and get a v-stack. So I'm interested in taking a generative adversarial network and making a pro version of it, a pro-object in the category of all possible generative adversarial networks. The goal is to categorify embodiment for AI, which I think is very strong: you model the perceptron as this v-stack perceptron, so that the output is
hopefully some kind of condensed set: not just a function anymore, but a sheaf of things, or a whole bunch of categories of something. This pro formalism would also be a model of embodied meta-learning, so I'm an advocate, through this mirror formalism, of getting meta-learning going, which would reframe the frame problem in terms of condensed sets or categories. I think the frame problem is huge, and to categorify it would be really nice. If you can model enactive robots, as advanced AI, in this kind of profinite formalism, and model mirror neurons as this play of imitation and emulation GANs, a sort of mirror game theory for enactive neurons, that's what I think the active inference formalism can actually do. You could construct a correspondence between embodied cognition, in my profinite formalism, and computability, and a sort of pro version of synchronic and diachronic emergence, built into the pro-GAN. If diachronic and synchronic emergence could be built in as affordances, I think that's really nice, because active inference needs something like that. There's something about synchronic emergence, higher-level phenomena supervening on subvenient neural structures, that gives you vertical emergence, but then you also have this horizontal emergence, where the novel property that emerged over time in some sense existed prior to its emergence. So I think emergence is going to be really beautiful, and I think we can incorporate a pro-GAN, some kind of condensed set, and a scaling up: when we say scale-free, let's also not be afraid to scale things like the perceptron, affordances, and enactive robotics.

Thanks. When you were talking about the generative adversarial network, I wondered if there's a collaborative version: just two networks teaching each other helpful examples, so that when you pull back a level, even something framed in an adversarial context between two models is actually being done in a collaborative sense. That's almost that multi-scale emergence right there. Yes, exactly: let's talk about the category of GANs, GANs talking to each other; that's what I want. Awesome. Daniel? Adam?

Well, with respect to what you were just saying, Daniel, I think that's actually a really good model of play: an ultimately benevolent adversarial process, adversarially attacking yourself. As for future directions, there are many things I'm not thinking of, but what would particularly interest me is getting actual meta-learning; that would be really key. Different forms of meta-learning: the ability to build up sophistication and, across episodes and contexts, make sure that what's inferred and learned there gets updated. To help with that, more cross-talk with machine learning could be useful, and more cross-talk with people like Shanna, who can give us the precision of knowing which mappings are legitimate and what our mappings are actually doing. For instance, I have the notion that you could think of the brain as a whole as a kind of VAE of sorts: a variational autoencoder doing dimensionality reduction into a latent space; that latent space has a kind of graph structure, like a graph neural network with geometric deep learning happening within it; and on top of that, the hippocampal system gives you a kind of higher-order semantics. But is that legitimate, and what's the relation? From there, particular operations would be happening, some of which might correspond to things like experience. Is your experience a kind of discrete updating of snapshots, or is it actually a continuous flow? Within the association cortex, is the latent space just fast message passing, or is it the enslavement of the whole hierarchy? The question is what computational objects we're dealing with, and it seems like we're going to need cross-talk with people who have expertise in category theory and machine learning to understand them well. If we bring those together, a lot of the debates might end up being resolved; we can have, for instance, both instrumentalism and realism, and at certain points we'll have enough constraints to work our way toward things like applying sophisticated inference to yourself. I don't want to criticize active inference, but it does seem like a failure mode is that the power of the framework can reproduce the old failures of good old-fashioned cognitive science, the don't-bother-me-with-the-plumbing functionalism. Ideally, the more perspectives we can bring to bear, going across the implementational, algorithmic, and computational levels, cross-referencing them, and bringing in phenomenology too, the better. Sometimes it seems we're a little anemic on implementation, squinting a bit more than we realize, so how do we beef up the algorithms? Ultimately, though, for me the endgame is that I want to bring active inference to explain consciousness. I think it's a major transition in evolution; it's the elephant in the room, it is the room, by and large. It's not the only thing going on, but we should not underestimate its power, no matter how fashionable it is to denigrate it: things like the nature of selfhood as a constructed process, and getting metacognition out of that, not just reducing metacognition to a kind
of hyperparameter, but treating it as an actual process with particular epistemics, realized in particular, constructive ways. Ultimately, I think active inference will be capable of making major inroads into answering Schrödinger's question, more than we've done in the past; I think we might even have a definitive answer, maybe in the not-too-distant future, and potentially a definitive answer, or not, to the hard problem. And ultimately I want free will to be a part of it. I know there are a lot of ways that word is used, but to refer back to the earlier language: we want the degrees of freedom and the directed drive, this optionality, this empowerment of our options being open, but we want to be able to steer those options in particular ways. Free will in this sense is what meaningful power is; it's what enables meaning, what enables open-ended evolution of a particular directed kind, where we actually get teleology, where the things we care about actually move the world. It's what let us get to the moon, and build bombs that leveled cities; the future will depend on it. Free will is going to be part of that story, precisely operationalized; you could just say agency, that's fine, but that's what we're going to need. And potentially the GAN framework would be part of that: how well am I doing, precisely narrowing down the micro-mechanics of agency. Thanks, that's mine.

Thanks, Adam. I would love to see, and expect, and will enact policy to improve, communication and education. You're right that there's so much more to be done, and it's worthwhile to criticize or to see where things could improve, because those are the affordances for a contribution. It would be awesome to see everything that people just mentioned arise, and also to include in the conversation a lot of voices that maybe know about active inference today but don't feel like they're in a position to contribute: even beginners, especially, contribute through their questions. And for those who have disciplinary expertise, or who want to be a new kind of generalist for this new way of doing science, I hope they'll find a home in active inference as well. Fun times: the MathStream is always a wild ride, and we really appreciate it; it's been an awesome conversation. I hope people will stay in touch; if you're watching in replay, leave a comment, and the lab will still be there. To all of you here, thanks so much for participating; it would be awesome to have each of you give a solo or have a conversation to explore some of these ideas. Great times, everyone. Bye.