some questions that we had from the dot zero, but also new questions arising. So we'll just start with the introductions and saying hello, and then it'll be awesome to hear the authors' initial contextualization. So I'm Daniel, I'm a researcher in California, and as explored a little bit in the dot zero with Sid, there are a few places that we were super curious about, ranging from how one goes from a phenomenon that they want to model, like navigation, to the analytical equations, and how that is related to active inference and the free energy principle, and then how we go from that analytical formulation to the actual gigabytes of video data and the real-time sensing and action. So a lot of exciting topics, and I'll pass to Blu. Hi, I'm Blu, I'm a researcher in New Mexico, and this paper was kind of way over my head for a lot of reasons, just so far out of what I normally do, but I have some questions related to the hierarchical modeling and how that is used, and how it can or cannot be related to biological systems that model in a hierarchical way similarly. And I'll pass it off to Adam, if you want to introduce yourself, and then you two can take it away. Hi, I'm Adam, I'm a systems neuroscientist who's currently a research fellow at the Johns Hopkins Center for Psychedelic and Consciousness Research. I've been collaborating with Tim and Ozen and others on trying to understand the nature of navigation, the ways in which the hippocampal and entorhinal system might be relevant, and the ways in which this could be used as a means of understanding thought more generally. And it's been awesome and slightly intimidating, except Tim is awesome, so that's made it less intimidating. So that brings it to me: I'm Tim, I'm a computer scientist and roboticist at Ghent University. We subscribe to the active inference formalism to try to make robots slightly more intelligent.
And this paper was, I think, one of our major accomplishments last year. It was a collaboration with Adam where we basically looked at the problem of navigation from different angles, ranging from active inference modeling to an actual implementation on a real robot, and at the same time looking at how this could work in an actual brain, and what the relations and parallels are there. So it's really exciting to have this paper out, but this was also just getting started, I'd say. It was basically our first artifact that we published. We have another one together with Adam in the works, and I'm pretty sure that we will continue this endeavor for quite some time. So we're really only just getting started. That being said, probably a lot of the questions are also still questions for us, so we'll see to what extent we can give an answer as of today. Awesome. Well, we can start with this sort of triad that is highlighted in the paper. And Adam, you said it as understanding the basis of navigation. We have these three legs to the stool, three threads that intertwine in the paper: an analytical formulation of navigation, a hierarchical generative model, which is what Blu mentioned, and the implementation in robotic and in biological systems. So maybe a first question is: how has this intersection of modeling navigation with robotics and biology been conceptualized, what brought you to apply active inference in this way, and what does it add? Tim, do you want to go first? So maybe my perspective here is that we basically got started from a robotics perspective, looking at the navigation problem and at the ways it's being solved and implemented right now. And at the same time, we already had quite some experience with active inference, but on simpler problems, let's say; our first experiments were more on these kinds of mountain car environments or a simulated car racer.
But it was always our goal to try to scale it up to real-world robots. And so we got started by drawing and seeing parallels between current implementations out there that seem to match with how we would look at it from an active inference perspective, and kind of worked our way from both ends. So on the one hand, try to get some implementation running, and on the other hand, see how we could formalize it and actually make the math work in the active inference framework. And what we found out was that the best-matching implementation from a robotics perspective was actually one that was quite bio-inspired: the RatSLAM model from Milford et al., which took heavy inspiration from grid cells, place cells, and these things. And of course, that was not really a surprise given how active inference is also grounded in neuroscience. And so that's where we reached out to Adam to basically show, like, okay, this is our trajectory, this is our plan, this is our current implementation, and this is how we think the math could work. And then Adam, with his infinite knowledge on how it could work in the brain, gave all kinds of pointers, like, okay, this is interesting, because what you implemented here is actually looking a lot like this kind of system, and what you're doing here is very much like these kinds of things. And then we just said, okay, let's dig into this further, and we basically started carving it out, ping-ponging, like, okay, if this is how it's done in the brain, then maybe we can make something more similar in the math or in the implementation, and vice versa. And the paper is basically a first step where we have the three bits kind of in place. We know where the open issues are, but it actually has something working and sensible at all three levels, I'd say. And that's where we are at this moment. Awesome, thank you. Adam, anything you'd add to that?
I guess what I would add is that I'm increasingly compelled by the idea that robotics, as a touchstone and basis for understanding different aspects of mind, might be the most powerful kind of grounded empiricism we can do. And I think it's partly that Feynman quote, "what I cannot create, I do not understand", but this is actually building systems that have to work. And in going through these problems and looking at the different solutions, the kind of understanding you get, I think, is really unique. But I also think it's particularly apt, because what are our minds other than cybernetic control systems for bodies that have to move through the world? And so the way roboticists are approaching it, that is the exact problem that nature, that natural selection, had to solve. And now the question is, did it solve it the same way? It might have in some cases, as you'd expect nature, within bounds, to be a fairly clever blind designer in terms of what it might discover. And so just going back and forth between the robotics perspective and how it might work in biology, and then the abstract algorithmic implementations of them for the robots, and seeing how those can mutually inform each other, for me, this is how I basically want all my thinking about the mind to be from now on. I feel like this is the most powerful way of thinking about it, that back and forth. Let's enter through the biological door. Blu, do you have a question? Okay. So Adam, even though of course it's a deep area and there's writing in the paper and citations and so on, what is so interesting about the structuring of the grid cells through space and time, and what can we learn from that about effective ways of navigating? In LatentSLAM, we have these pose cells.
And actually I think it would probably be better for Tim to talk about that, the exact ways in which these cells function within the robotic system. I think that actually brings more clarity than the more mysterious side; there are lots of ideas going around in computational neuroscience about what the grid cells might do, but I actually think the greatest clarity comes from asking how the analogs work in the robotic system. So I'm gonna kick that to Tim. What do you think, Tim? Yeah. So I think that in our current implementation, the parallel with grid cells is kind of hard-wired in the implementation. It's kind of by design: you have something that, if you squint your eyes a lot, looks a bit like grid cells. I wouldn't say that we actually have grid cells in here, because for me, a grid cell should be abstract; it should be giving you a way of modeling movement in an abstract space, basically. And in the case of navigation, that space by accident maps to the real world. And the grid cell activation is just a peculiar way of encoding where you are in that space without just giving you the X, Y coordinates: a more high-dimensional, more robust representation of where you are on this grid and, if you now move, of how your location in this reference frame, on the grid, changes. And so in our implementation, we basically just said, okay, we know that we are in a roughly flat-surfaced world, so the main coordinates that we really need to track and integrate over are the X, Y position and the head direction. These are basically the main things. And then we use this thing called a continuous attractor network to represent these three coordinates. And we basically re-inject the knowledge that if you move forward, then your location in this attractor network will also move forward.
And roughly this corresponds to doing path integration, and I'm pretty sure that grid cells are giving you similar functionality in the brain, but I would not say that our current implementation actually has grid cells. I think the parallel is more with place cells, because there we do learn representations from visual sensory inputs that give you an estimate that this is a particular location. And if you see a similar pattern, then it is likely that you are visiting the same location again. And if at the same time you happen to have a very similar X, Y and head direction as the previous time you saw something similar, then you have pretty good evidence that you are actually in the same spot. And this is what we then call a loop closure, and then you reorient yourself as if you were at the same location. So that's kind of how I would think of grid cells and place cells in our current work. Thank you. Blu, do you have any questions? Yes, please go for it. So I'm curious as to the role, if any, or how the system may change, if you included proprioception, which is the sense of where your body is in space. This is partially modeled through the visual system, place cells, grid cells, et cetera, but not entirely, right? It includes other senses, like balance and where is my hand and these kinds of things. And I'm just curious whether your model includes that in any way or not, or how it might change if it did, to give it kind of a more embodied grounding. Yeah, so basically in our case, proprioception is limited to the velocity commands that you issue to the robot. It also observes these again: it assumes that it knows which commands it is sending, and these are also fed back to the model as a kind of sensor.
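The continuous attractor idea Tim describes, a packet of activity over pose coordinates that motion commands push around, can be caricatured in one dimension for head direction alone. This is a toy sketch, not the network from the paper; the cell count, bump width, and shift-by-roll update are all illustrative assumptions.

```python
import numpy as np

N = 36  # one cell per 10 degrees of head direction (arbitrary resolution)

def bump(center, width=2.0):
    """A Gaussian packet of activity on the ring, centered on a cell index:
    the network's current head-direction estimate."""
    idx = np.arange(N)
    # circular distance between each cell and the packet center
    d = np.minimum(np.abs(idx - center), N - np.abs(idx - center))
    a = np.exp(-(d / width) ** 2)
    return a / a.sum()

def inject_rotation(activity, cells):
    """Re-inject a motion command by shifting the packet along the ring:
    the crudest version of 'if you turn, your location on the grid moves'."""
    return np.roll(activity, cells)

a = bump(0)                # the robot believes it faces cell 0
a = inject_rotation(a, 9)  # a commanded quarter turn shifts the bump 9 cells
np.argmax(a)               # the most active cell is now index 9
```

A real continuous attractor network keeps the bump shape through recurrent excitation and inhibition rather than an explicit roll, but the effect on the encoded coordinate is the same.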
So if you say drive forward, or turn left or right with these kinds of angular velocities, this information is also sent into the model, and this is then what we call the action. So everywhere, especially in the low-level model, you see the action pop up. You can see this as kind of a proprioception thing. And so everything kind of predicts what will happen given what I sense that I was doing, basically. And we do it both for the visual sensor, like what would I see if I'm turning left or right or going forward, but also for tracking the pose and doing the path integration, like if I move forward and my heading was at a zero angle, then I will probably just move forward in the X direction of the map. And that introduction of counterfactuals and anticipation, what would happen if I did this, is a very interesting area, and it also speaks to the composability and the flexibility of the active inference model. Like Sid highlighted, only the visual data were being used here, with RGB values of video, but there are other kinds of sensors that could be used as another sensed state, and that would increase the dimensionality of the model. Maybe it's a value add, maybe it's not, but it doesn't require rewriting. It's more like plugging in a different hard drive than having a different computer. So you mentioned a higher-dimensional, robust representation of location. And that's definitely one of the advantages and aspects of the model that'll be important to explore more. In the dot zero, we explored the paper "Are We Ready for Service Robots?", which showed how sometimes even trivial changes to environments can result in the robot getting lost, because it has a reference to something like a chair that's no longer at the same location. So how does this model provide a higher-dimensional, robust representation of location? And how did you go from thinking about that to the actual way that the generative model is framed?
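Tim's example of the pose prediction, forward motion at zero heading only advancing X on the map, is ordinary dead-reckoning over (x, y, heading) from the commanded velocities. A minimal sketch of that update, not the paper's implementation:

```python
import math

def integrate_pose(pose, v, omega, dt):
    """One path-integration step: update (x, y, theta) from the forward
    velocity v and angular velocity omega that were commanded, the same
    signals that are fed back to the model as proprioception."""
    x, y, theta = pose
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta = (theta + omega * dt) % (2 * math.pi)
    return (x, y, theta)

# Forward at heading zero: only x changes.
integrate_pose((0.0, 0.0, 0.0), v=1.0, omega=0.0, dt=0.1)  # -> (0.1, 0.0, 0.0)
```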
That's the first question. And then the second part is gonna be about how you go from the framing to the actual computational implementation. But just the first stage of this pipeline: how do we go from wishing upon a star that we could have biologically inspired, resilient navigation to writing down the formalisms as you had them? Yeah, so to get back to the problems that you mentioned with the service robot: basically the major problem in robotics SLAM is that indeed, if your environment changes slightly, then the same location doesn't look the same anymore in whatever sensor modality you're using. And that can be problematic, especially if you want to model the environment as a metric system, where you basically define: this is my origin, this is zero, zero in the coordinate system, and now, for every obstacle or thing that I approach, I will log my estimate of its X, Y coordinates. And then suddenly when your chair moves, of course, everything breaks. But here in this system, we actually get rid of being grounded in a metric space. The only thing that matters is that this location looks very similar to something I visited before, and that also seems likely given the path-integrated trajectory I assume I was following, so this might be the same spot. And if the chair moved in that place, that's fine. Then you just say, okay, maybe it's a different spot; it doesn't really matter. At some point, you will have a cue that is stable, right? And that's actually what we also do as humans: when you want to navigate, you just look for the cues that you know are pretty stable. If you visit a new town, you will look for the church tower, because you know it will not move. You will not orient yourself based on the color of the cars that are parked somewhere.
So I think by using the active inference principle as a starting point, you can get rid of a lot of these problems, because your agent will automatically look for the stable cues. And if suddenly the environment changes, it will just accept this and model it as such: okay, I think I'm revisiting this same trajectory, but now it looks different, so maybe I just have to model this as a trajectory that can change often. And maybe this implies that when you plan the next time, you'd rather prefer a different trajectory where you know that everything is stable, right? This is the same as reducing your ambiguity. So I think getting rid of the idea that everything should be structured by X, Y location in a metric space, and otherwise I don't know what's going on; just getting rid of that and assuming anything can happen, anything can change, and if it does not change, I prefer it, I like it better; this helps a lot to circumvent a lot of the traditional problems, I'd say. Adam, anything to add on that? I mean, one of the things that we've been working on, like in this paper, is trying to say: to what degree are the principles of what we understand about the nature of the hippocampal-entorhinal system reflected in this simultaneous localization and mapping problem in LatentSLAM? And some of the other work we're doing is also going in the reverse direction: what can these SLAM principles, used for autonomous navigation, tell us about the brain functioning as a cybernetic system? And so one idea in general is that each of these operations, in addition to having this fundamental significance for the basics of being able to move through the world, this task that any active inferential system that's gonna survive and do what it needs to do has to handle, to what extent were these same principles repurposed for sophisticated inference more generally?
And to what degree is sophisticated inference structured according to SLAM principles? And so if you're talking about these loop closure events, finding this highly familiar state where you think you know where you are by the convergence of your trajectory information and your pose, well, the question would be: is this potentially a model for the feeling of insight, or of discovering causal accounts? And is this actually a source of inter- and intra-individual differences, which would be fairly fundamental? Like, how familiar do you think things are or not? How much do you update on encountering these highly familiar situations? And so me personally, what I've been wondering is to what degree we can re-represent the biological details in terms of the kinds of descriptions that Tim is giving here, and also think of cognition more generally as navigation. And so, thinking of each of these things, what are the analogs at the level of thought as a kind of navigation through generalized space? Awesome. Yeah, there's been a lot of interesting work on exploration-exploitation in mind and space, Hills et al. 2015, and on thinking about planning and the evolution of planning as being related to spatial as well as cognitive foraging. Blu. So it's interesting to think about cognition as navigation. I think about the feeling that I have when I'm searching for a thought, a word, a name of a person; it literally feels like you're scrolling through a filing cabinet, physically looking for this piece of information in your brain. And I know that different neurons store different pieces of information, like the Jennifer Aniston neuron and these kinds of things. So I wonder, can you just elaborate maybe a little bit more on that, Adam, about cognition as navigation?
Is it like literally navigating to find the right neuron that goes ding, ding, ding, like, that's the one I wanted? So, it could be something like that. Two ways I might think of it. One: like you were bringing up memory palaces before, and I think that would be one example of thought as navigation, with this art of memory. Part of the way we access and organize the information we work with when we're thinking is often through a kind of spatialization, and actually, to the extent that we show really great virtuosity of memory, it's through an explicit process of spatializing and navigating through a domain. And that gives us the best purchase we have on remembering, building, constructing, and accessing these complex structures. But in terms of what you're describing, of something kind of like Jennifer Aniston neurons, of finding the thing that matches what's in mind, there is foraging theory. And so what you might be foraging for, when you're looking for something that fits, is a certain amount of familiarity, or some signal that could take different forms: maybe it's an interoceptive code and it's a visceral feeling of compellingness, or maybe it's more of a proprioceptive code and it's tension in the body. I don't know what it might be, but there's some sort of read you're having on: are you finding what you're looking for? And so then the idea would be that you're foraging for it. So you might spend a little bit of time in some area of conceptual space, and you're kind of moving around in there, trying out different permutations. And then you're like, no, I'm not finding it.
And then you might take a different tack and move to another area of semantic or conceptual space, and you're searching around there. In foraging theory these would be called, I believe, Lévy flights. And there's something called the marginal value theorem, which says that if you're trying to be an optimal forager, you're trying to balance exploration and exploitation. So you don't wanna spend too long in a given patch if it doesn't have what you want, but you don't wanna leave a patch prematurely if it's a good patch to be exploited. And so the idea is, if you're searching for a name or something that fits, it might take the form of this kind of generalized foraging, but instead of locomoting through space, you're actually spending different amounts of time in semantic spaces and then shifting based on whether you think it's actually a juicy patch. And the marginal value theorem says: if your current rate of foraging drops below the historic return, then you should go. And so there might be different inductive biases in the brain that would represent this. For example, in the hippocampal system, the way this might play out is that prediction error drops below or goes past a certain amount, and then the hippocampal system ramps up with a lot of recurrent activity, which is called a ripple, and then it resets itself and re-tiles the space and stimulates cortex, and a new set of operative policies or spatializations might occur. And so this would be an example of a mechanism: the hippocampal system might have a built-in mechanism for these attentional shifts and these imaginative shifts when you're doing foraging, and that might be reflected also in other ways, like dopamine.
So there's some thinking on this; Nathaniel Daw has talked about one way of interpreting the opponency between phasic and tonic dopamine: the tonic would tell you, basically, what your average return is, and there seems to be a kind of functional, I don't know if it's quite antagonism, but the higher your tonic is, the less impact any phasic event has. And so if you're foraging, what would occasion a shift or not, you might actually see playing out in the way neuromodulators work. And so I don't know if that helps, but the idea is that this sort of thinking in terms of foraging, or navigation through generalized spaces, there's a kind of a priori reason you would expect it to be generally good, which is: most of the things we encounter do indeed exist in spaces, and the way we understand most things is by spatializing them. We graph them, we do multidimensional scaling, we do different things; it's just a way we understand things. But it also seems like, both evolutionarily and developmentally, this was probably the original challenge that had to be solved, where these systems were paying their way, and this would then involve a repurposing and a redeploying for more abstract domains.
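The patch-leaving rule Adam cites, leave when your current rate drops to the historic return, can be written down directly once you assume a concrete form for how a patch depletes. The exponential decay here is an illustrative assumption, not something from the discussion:

```python
import math

def patch_leaving_time(r0, decay, r_env):
    """Optimal residence time under the marginal value theorem, assuming
    intake in a patch depletes as r(t) = r0 * exp(-decay * t).  Leave when
    the instantaneous rate falls to the environment-wide average r_env,
    i.e. at t* = ln(r0 / r_env) / decay."""
    if r0 <= r_env:
        return 0.0  # patch is below average from the start: skip it
    return math.log(r0 / r_env) / decay

# A richer environment (higher average return r_env) cuts stays short:
patch_leaving_time(8.0, 1.0, 1.0)  # ~2.08: worth lingering
patch_leaving_time(8.0, 1.0, 4.0)  # ~0.69: leave sooner
```

The same rule reads naturally for cognitive foraging: the "patch" is a region of semantic space, and the intake rate is whatever felt signal of progress the searcher is tracking.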
There's some other work, I think it's Burgess who has done some good studies, where, for instance, they'll create a morphospace: they generate birds with different neck lengths, and you're trying to form categories over what the different kinds of birds are. And there's evidence that the way we do the categorization is that we create a spatialization of this task structure, and some evidence that the hippocampal system is navigating through that as we deal with these categories. I know that was a lot of rambling, but I don't know if any of that landed, or... Yeah, lots of it landed, it was very cool. Thank you. One note that I'm sure we'll be exploring going forward: optimal foraging theory is often put in the context of utility, which is just the pragmatic component in active inference in terms of policy selection. So optimal active inference foraging is when both the epistemic and the pragmatic gain are being considered. So that's quite interesting. And then also, as you brought up, there's individual and collective navigation. So one case is sort of the sovereign robot, no Wi-Fi, onboard computation, alone in the warehouse; but then there's the case of, for example, an ant colony, where there's stigmergy and modification of the environment that might be possible or even facilitated, with different onboard cognition and different outsourced or offloaded cognition, which brings us to extended cognition discussions. But let's return to this question about going from this resiliency of navigation that a biologist knows exists, or a roboticist might want to see: how did you then go to that third space and actually arrive at a hierarchical model? So here in Figure 2, or wherever else you suggest, what is happening, and how did this graphical structure come to be, as opposed to other graphical structures that could have been used? Yeah, okay.
So basically, let me first explain what all the variables mean in this figure. Going from the bottom up, at the bottom we have the O's, the O_t's, which are observations. So these are the visual inputs, what the robot sees, basically. And then we have two latent variables, S and P, where S is basically an estimate of all the latent factors that give rise to the observations. It's kind of a latent space that captures all possible visual observations. And P, on the other hand, is the latent variable that has a representation of your pose, which in this case is your X, Y position and head direction. And then we have one more, which is A, the action. And that's what I hinted at previously: this is basically your proprioception. You know what you have been doing. Therefore, up until the present, the A_t's are also shaded, which means that you have access to what you have been doing. It's the low-level action, like move forward at this velocity, or turn left or turn right. But for the future, this becomes an unobserved thing, and you basically want to infer potential actions you should be doing. And then, for the future, a number of actions can be summarized as pi, which is the policy. So the policy is nothing more than a sequence of future actions that you want to do, or that you intend to do. And then we move up a level in the hierarchy, and there we have two more variables. One is the location L, which is basically a more coarse-grained representation of location, where am I, but it's not necessarily tied to a metric space; it just identifies: this is a location. And then we also have an action space at this level, but we call these moves, or M, and a move is basically moving from one location to another. So locations can be more coarse-grained, like what you define as a separate location in your environment.
Whereas at the lower level, if you move from one state to the other, it's basically just saying, okay, what would be the next frame from my camera sensor? So it's also crucial to know that these two levels operate on different time scales. The lowest-level time scale is basically a very fine-grained action-perception loop: I send the motor commands to the robot and I read out my camera, and I think we implemented this at 10 hertz, so 10 times per second we get a new camera frame and we update the motor command. Whereas moving from one location to another can take you maybe 10 or more time steps at the lower level. And then of course, one of the crucial bits is how you identify a location. And so, basically, how it comes to be is that as you observe your space through the camera images, we infer this abstract state space S, and then we look at the surprise of being in this state versus what I predicted, and whether this surpasses a certain threshold. It's pretty complex to explain, but I'll do my best. So every time step, you get a new camera image, and you can calculate the KL divergence between what I was thinking about this state previously and how that shifted with my new observation. That is basically the Bayesian surprise that you get from seeing this observation. And every time step, we accumulate this number, and once it surpasses a certain threshold, which is a hyperparameter that we just fix at something that seems to work, you basically say, okay, this is now distinct enough from n time steps before, so it must be something new.
It must be a new kind of location in my map. And then you add this as a distinct location in your map, and you add a link from the previous location to this one, and then you continue building the map. This is how you create this more coarse-grained map of your environment, and you can then reason about how you should move from one location to another: you can use your previously visited locations and moves as a model of how you can move in this space, but at the more coarse-grained level of moving from one location to another. And it also means, because we use this surprise in the latent space that we infer from visual input, that not every edge necessarily has the same distance in real space. So if the robot is traversing a hallway where the walls are perfectly white and everything looks the same, then it will take a long time for this threshold to be reached, and it will only make a new node at larger distances, because everything looks the same anyway, so there's no extra information that I need to put in the map. But if you then suddenly see a door, for example, you suddenly reach your threshold and you add a new location to your map. So the locations in the map are basically defined by the information I got by looking at this place, rather than by how many meters I traversed in this time. And it makes for a really interesting way of representing maps, because it's more about how much information there is in this particular location rather than which X, Y location is here. Does that make sense?
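The accumulate-and-reset rule Tim just walked through might be sketched like this. The diagonal-Gaussian form of the state belief is an assumption for illustration (the paper's latent state model may differ), and the threshold is the hyperparameter he mentions:

```python
import numpy as np

class LocationDetector:
    """Accumulate per-step Bayesian surprise (KL between the prior and the
    updated state belief) and signal a new map location once it passes a
    threshold, a hyperparameter, then reset for the next stretch."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.accumulated = 0.0

    @staticmethod
    def kl_gaussian(mu_q, var_q, mu_p, var_p):
        # KL(q || p) between diagonal Gaussians
        return 0.5 * np.sum(
            np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
        )

    def step(self, mu_prior, var_prior, mu_post, var_post):
        """One camera frame: add the surprise of this belief update."""
        self.accumulated += self.kl_gaussian(mu_post, var_post, mu_prior, var_prior)
        if self.accumulated > self.threshold:
            self.accumulated = 0.0  # reset: start measuring the next segment
            return True             # distinct enough: add a node to the map
        return False
```

A step where the observation matches the prediction contributes nothing; a surprising observation, like a door appearing in a blank hallway, pushes the sum over the threshold quickly.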
Yeah, and it feels to map onto the distinction between, like, Kronos and Kairos. In this analogy, Kronos is the chronometer, it's the watch time, it's the geometry, it's the X, Y grid, it's one meter. And then Kairos is the timeliness of action and the time it takes to complete an action, and then something being, like, boring: going through a low-information hallway in the cognitive foraging. It's like, are we there yet? Driving through this featureless landscape where the salience is low because my predictions are basically being met almost too well. And there's a lot of other ways to go with it. Adam, anything to add? And also, again, I'm just kind of curious: what is the space of the possible graphical structures? Do you think that this is the only, or the sort of grandfather, graphical model for hierarchical navigation? How do we go from imagining this tactics-and-strategy multi-level model to a graphical model that we believe will be consistent with active inference and amenable to message passing? And maybe to add on this last question: I think Friston has a paper on deep temporal hierarchical models where he basically lays out the more generic case; if you're looking for the ultimate generic hierarchical model, you'll find it there. It's basically just an aggregation of states that give rise to observations, that go from one state to another given an action, and then you can add a level on top, and a level on top, and you can keep on adding levels. That's kind of the prototypical hierarchical model, I would say. And what we see here is a particular instantiation where we took some domain knowledge: okay, for navigation, what would be sensible levels to think about, and these latent states, what should they represent? 
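As a toy illustration of that prototypical deep temporal structure, here is a minimal sketch (the function names are hypothetical, not from the paper) of how one high-level transition, location to location, spans many conditioned low-level transitions, frame to frame:

```python
def deep_rollout(high_step, low_step, s_high, s_low,
                 n_high, n_low_per_high):
    """Roll out a two-level generative model: the higher level ticks
    once for every n_low_per_high ticks of the lower level, and each
    low-level transition is conditioned on the current high state."""
    trace = []
    for _ in range(n_high):
        s_high = high_step(s_high)           # e.g. move to the next location
        for _ in range(n_low_per_high):
            s_low = low_step(s_low, s_high)  # e.g. predict the next camera frame
            trace.append((s_high, s_low))
    return trace

# Stacking a third level on top is just another loop wrapped around this
# one, which is what makes the hierarchy arbitrarily extensible.
```

With roughly 10 low-level steps per location change, as in the robot described above, three high-level steps generate thirty fine-grained action-perception ticks.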
And then of course you easily end up with: okay, at the lowest level I need to have some state that represents my camera images, and at the higher level I want to have some locations in the map, and how do they then relate? Okay, I probably also want to have this path integration, so maybe I need a special first-class citizen here in the form of my pose. And that's how you basically build up the model: first looking at this prototypical hierarchical model, then looking at, yeah, but in my case, what do I think is sensible? What could these things mean? And then try to figure out, okay, at this stage we kind of converge, on the one hand from the top down, does the map work out, to bottom up, does this map match with my implementation? And at some point we converge to this, basically. So the high road and the low road, they do lead to Rome after all. Exactly. Adam, what would you add in there? Well, first of all, I love that metaphor just now. The other thing I would add is, on my end I'm trying to use this as abstractions for understanding aspects of brain functioning. So let's say we're talking about this top level; this is one way of interpreting hippocampal place cells. And whether you create a new node or not, based on what threshold of familiarity or surprise, would potentially be an example, I think, of an extremely powerful source of individual differences. Maybe you could think of this in terms of Piagetian assimilation or accommodation, but you could see, if your thresholds are too low or too high, you could end up representing a domain in ways that are either excessively granular or insufficiently granular to represent the domain, and you can get different types of errors associated with it. 
And so this, for instance, could connect to things like what some people describe as autism-to-schizophrenia spectrums, to say there could be underlying predispositions for this. I think that's an oversimplification, but you could think of these thresholds such that some cognitive spectrums might just fall out of them, as you get different types of minds if you have a more granular, an overly inclusive or an underly inclusive, category structure. But some other things worth trying on this would be: if we're talking about this lower level, and if we're thinking of these entorhinal grid cells as well (while the poses here aren't quite grid cells), there's this idea of mapping what your body position is and where you're oriented, having this conjoined with what's in your visual field, and then using these sources of information to mutually constrain each other. This seems to be fundamentally what was needed to get the robots to work, and we have not so dissimilar sensors and effectors. And so one thought is, I think it's Guillaume Dumas, I've just messed up his name, but he has this interesting paper on the default mode network as "dark control," thinking of it as doing tree search. To loop it around this pose information: it's possible that some of the posterior nodes of the DMN are actually helping to encode this information. If you go to the parietal portions, that might actually be where is my body, where is it reflected; if you're going to these midline portions of it, what's in my visual field or what am I imagining; and then if you want to go for this future part, that might be the anterior portions, so the DMN or the dorsomedial prefrontal cortex maybe coupling with the hippocampus, the anterior hippocampus, 
but basically it would seem that these core aspects of the model might be ways of functionally understanding the core aspects of an agent architecture used for mental simulation. There seems to be a striking amount of convergence between the things that people like Tim are identifying, as just what's necessary, how do we solve the problem, and the features of neuronal organization that would be these default or canonical processes, which you're seeing as really just fundamental. We call it the default because it's active all the time; what's it doing? Maybe part of what it's doing is allowing us to do this kind of navigation through space and time. Yeah, those are some thoughts. Navigation through space and time in the sort of territory sense would be clock time and metric space, what is your X, Y, Z, T; and then navigation through mental space and time might have some resonances or analogies, but it's not a piece of grid paper, so what is it, and how does it become realized through ecology, evolution, and development? So, awesome points, thank you. Real quickly, in terms of the time issue: it seems like there are multiple ways that time comes in. There's one sense of time where you might be doing accumulation based on the frequency with which you're having meaningful informational events, what Craig calls global emotional moments; and then Neil and others, I think with Murray Shanahan, have some interesting models of the sense of time in terms of the amount of information flux through your sensors. But then there's another kind of time that might actually involve a spatialization of a trajectory, and you might be using something like path integration for how you're doing some types of duration estimation: not necessarily just the sense of accumulation, but some of it might be, where am I in navigating through a timeline? And I 
think you'll see this reflected in the way people talk, and we could dismiss this and go, that's just mere metaphor. No, I think that metaphor is how we're actually constructing some sense of time, which actually gives us forms of control that we would not have otherwise, were we not able to extend our minds metaphorically in that way and then be able to navigate through spatialized time, to keep track of some longer time sequences if we want to plan with respect to them. Awesome, thank you. Steven? Hello, and great to see you all. I was curious in terms of time. Well, one of the things is: do you think time, or this process, is distributed around different parts of the sensorium processing, because different sensoria have different speeds or different rates or different abilities? And then the other thing, coming back to time as well: people can be going through time, or time can come towards them, or time can be developing around them, which often happens in Asian cultures, where time is seen more as something that's around them. So I'm curious how these ideas of temporality might link to the ways that we imagine ourselves with our sensorium. I mean, if it is indeed the case that some spatialization of time is happening, in some cases through some fairly abstract metaphorical and linguistic constructions, then this would imply that you should be able to get meaningful cultural differences based on the shared constructions, and maybe even some quasi-Whorfian effects, where linguistic modifications could actually change the way people are seeing themselves and others in relation to time, with some fairly significant consequences. 
To unpack the jargon: Whorfian effects refer to the idea that our thought is constrained in some meaningful way by the constraints and affordances of our language. There are stronger and weaker versions of that hypothesis. Thank you. Blue? So it's interesting, because as we've been talking about spatial maps and spatial context, spatial representation, navigating through space, I have been thinking a lot about semantic navigation also: when we're looking for a thought, when we're navigating, foraging for information, or in these context-rich domains. And I just wonder, what is semantic navigation related to, linked to? Is there overlap in the brain in the way that we process semantic orientation and representation, and space also? So I was just curious. Hashtag maps of meaning. Yeah, maybe I can respond to this from my perspective, and then I'll probably have to leave you guys. I think one of the most important things, at least to me, of this whole endeavor of constructing this and doing this together with Adam, is that we start off with robot navigation, which from a robot perspective is really about X and Y position and about time in seconds. But what we do here is, by creating this model, we basically get rid of this close tie of being grounded in seconds and meters. I think this is one of the most crucial aspects of our model, because it shows that, well, on the one hand, I think we all agree that we don't get born measuring time in seconds and measuring space in meters. So we actually get a framework that also gets rid of these metrics and just learns to navigate in this more abstract space. 
And if we can get it to work for navigation per se, moving through actual physical space, then I'm pretty sure that we can also extend the model further to navigate any abstract space, because in the end, once we get a level above our sensory inputs, everything is, in our case, just a learned latent thing. So if we can make this work for physical navigation, then I think we are already a big step closer to making it work for any navigation. And indeed we come to the realm of: okay, I have lots of memories, these are tied together somehow by how these memories came to be; can I now navigate the space towards the memory that I'm looking for? Or if you are solving a math problem, how do you navigate the space of math equations to get a solution for this thing? Once we get rid of the time and space that are so tied to your sensorium, and move towards a more abstract space and are able to navigate that, this is the crucial bit here, I would say. And so with that, I'd like to thank you for the discussion till now. I'll try to be there next week as well; enjoy the discussion further. Thank you, Tim. See you. Cool, very interesting, and also speaking to that cultural basis: there was the inch-foot-pound paradigm, and that has in certain places coexisted or been complementary or contradictory with the centimeter-gram-second paradigm, based upon something different. And what would happen if a larva was born, and for the little one, the second was like what we call three seconds? Is that just a more chill larva, because every one of its moments is taking a few more heartbeats, and it can actually take a breath per little one? 
So there's a lot that we probably don't even know how much we don't know, until we're in feedback with these models that are fully realized, not just in the equations but in the actual robotics, grounding us in those actual observations of red, green, blue pixel intensities. Which is like when Adam was saying that robotics are an empirical platform for studying the nature of cognition and brain and mind. And then here is where we move into a different space and time. So that is an excellent point by Tim. Steven? Yeah, so following on from what you're saying, Daniel: I know Tim just mentioned the abstracted ideas, but there are also maybe the grounded ways of knowing, as you mentioned. So it might be that in the future we abstract out what we think it might be, and then there may be ways where we feel what's right, what feels so: you hit a door and you feel there's an information affordance, and it can ground. So I was wondering how much that can support not needing to have the metrics, like you say. And maybe the metrics that we've created, like the second, are partly because they're at that kind of sweet spot, or somehow close to a sweet spot, of what sort of makes sense. Someone had to decide on a foot or a hand; so was it the small person's hand or the big person's hand? And they sort of said, well, what about John? And maybe it was the king, I don't know, but it got decided somehow. So I think this is a good question about grounding versus abstraction; I don't know what the thoughts are on that. Yes, this higher level: in a purely metric model it could be, well, on the bottom level we're dealing with centimeters and 100-millisecond frames, and the coarse-graining is at the level of 100 times the spatial distance and 100 times the temporal distance. 
What this model is doing is something different, and I think that is again this biological inspiration: what is happening at the level that the action-perception loop is nested within is not just a bigger version. Well, it both is and isn't, because it has some resonant graphical structures that support the composability and tractability of active inference models, yet it's also doing something that's categorically different, which is navigating in a more topological space rather than the more geometric space down at the bottom. What are you thinking about, Adam? It seems like, well, I guess one thing is I suspect it is no coincidence that seconds are about the duration that they are, and that we have the metric units we use, like a foot or a meter or a yard, in terms of grounding and in terms of connection to affordance. So a second may have a relation to a certain heart rate, or certain biophysical pattern generators, or the scope of working memory and iconic memory being stable on different scales; and a yard or a meter is basically the reach of your arm, like peripersonal space, a field of affordances around your body where it's particularly concentrated. 
And so, you know, these are ways in which, for instance, the granularity of our modeling would be influenced by language, or the language would reflect it and maybe to different extents influence it. But you would have this other kind of granularity of space that might be different for different species, but also within a species and within an individual. For instance, the hippocampal system will lay out these tilings of space for a task domain, and they'll be refreshed, it looks like maybe every three seconds or so. This tiling will have hexagonal arrays, it'll fill whatever space it's in, and it'll have different levels of granularity: if you're in a big space it's going to necessarily be less granular, and if you're in a small space it'll probably be more granular. And there are other things that will influence the granularity also: there are brainstem locomotor nuclei detecting how fast you're moving, and this will influence the granularity of your tiling of space. So some of this is what is relevant, some of this is the graining of a domain, some of it will be reflected by these basically innate architectural inductive biases, and some of it will be more socio-culturally constructed. One thing, before moving on: I have to say I like how Tim, as he left, that was one hell of a mic drop. Because the thing is, basically, in terms of kinds of grounding: not only is this potentially a way of having an architecture capable of navigating conceptual domains and abstractions, basically what Bengio calls system 2 AI, or what some would call strong AI, but it's also grounding in that it actually works as a control 
system for an embodied, embedded agent. And so now you might have a single system capable of giving you symbol grounding, by allowing something to be in the world alongside other agents and have actual semantics in terms of pointing to things in the world, rather than, you know, a semantic network that's just symbols pointing at symbols; ultimately such a system would allow you to ground your semantic network in things in the world and shared symbols. And so, I mean, who knows how far anyone working on it today gets, but this is ultimately a path to AGI built on enactivist principles of what you would need to do, which is more than a little exciting, I think. Well, I'm starting to see why people are interested in this. Excellent point about the symbolic semantics and logic and how that connects to the enactive insight, the pragmatic turn, ecological psychology, biosemiosis: all of these features that get left on the cutting block if we take a computational linguistics approach only. And computational linguistics plus the mouth movement already has incorporated that pragmatic turn, that embodiment element, and that's quite a different model than just the substrate-independent symbol manipulator. Steven? Yeah, I think it's exciting indeed, because it can open up a lot. I'm really interested in mental space psychology and peripersonal space and working with space as an instrument; well, I'm more than interested in it, that's my main focus in many ways. But what's really been the challenge is giving it a plausible bridge back, because when you're working primarily, you're looking at what action policies are and what has been enacted, and it's very hard to go back to observations, because you're coming from the other direction. So I think it'd be really interesting to see with this robot: what kind of robot is this robot, what's it like, what kind of story 
patterning gets triggered by this robot? And maybe if it grew up with a slightly heavier wheel on one side or something, it has a slightly different character, you know. So I wonder if there's an actual way that the action policies themselves start to encode a certain personality, based on the physicality and sensorium that get weighted. I mean, in terms of this generalization of SLAM principles as a way of understanding cognition more generally, I think that would be a great test case, and those are the kinds of hypotheses you would look for. If your wheel is weighted more heavily on one side or the other, that could have all sorts of effects: for instance, like what Tim was describing, this node creation that's not just a function of metrics but of information. And so this creates a distorted space, but an adaptively distorted space, you would expect, usually, because multiple forms of value are influencing the granularity of this sort of space. So it's like a map, it's like space is a graph, but a graph being shaped by multiple kinds of value. And so you would expect things like that kind of wheel asymmetry you're talking about to result in different kinds of graph spaces, and this could then potentially have some transfer more generally, across more than just physical spaces, in theory. Also, coming back to that granularity issue: let's say the way you move through the world is quicker or slower; that itself might then transfer not just to how you're representing the space you're physically moving through, but could get reflected in cognitive styles, in terms of the way you relate to semantic spaces, the way you're going to forage through them, what you consider to be near or far connections. So this would be a whole research program, using 
this as a source of bridging principles: some connection between embodied experience and then more abstract thinking. Maybe this is a source of some of the ways in which concrete, embodied experiences go on to shape your thought at the more abstract level as well. Very, very interesting, and especially when we think about the mental action papers that we've read, like livestream number 25 with Sandved-Smith et al. about metacognition as mental action. And so in this two-level robotics case, we're talking about the sort of lower-level motion and then the higher-level navigation of locations, where we can say, I want to go to the post office and then the grocery store, but then we have this more tactical, taking-a-step time scale that's a lot faster. If this bottom layer is based upon cognition and perception also, this higher level is not just navigation in the warehouse; that's actually, where do I want to be in my regime of attention, or what would I like to be considering from a metacognitive perspective? How do I navigate to a place that I prefer, that's healthy for me? So there are all kinds of other questions about navigation in parameter spaces that are not just topologically the cousin of the warehouse space, but truly in the mental space fully. Blue, Steven? So, since we brought up hierarchy here in the paper: it looks like, and I mean I can hold this for Tim, Adam, too, if you don't feel comfortable talking about it, but maybe you're good in the hierarchical space. It seems that it's just a coarse-graining in this paper, right? So the higher level is just a coarse-grained version of what happens at the lower level, and it's easy to see how that takes place or takes shape in a topographical way. It doesn't leave a lot of room for emergence, and I wonder, if we jump to thinking about a semantic space, you know, we have words and we've got the relationships of words to each other, proximity, you know, how close do 
you expect to find, you know, "there" to "was" or whatever; so how linked are the words together in a semantic space. But I wonder, you know, you can go into topic modeling, and that gives you a kind of coarse-graining, but is there more room for emergence in that kind of space, maybe? Or how would you allow for emergent things to come through in this kind of hierarchical modeling or mapping? I think when Tim comes back, it could be worth it for him to go into more detail on the parameters that go into node creation or not, because I feel like, to the extent that you're getting some flexibility in ways of understanding emergence, that would be your models of emergence; or that would be where you would basically create this coarse-grained ontology of different levels of granularity, what constitutes an occasion for coarse-graining, of what kinds. It seems like the node creation would be where your assumptions about what kinds of emergence you might be dealing with would come into this kind of model. Is it rich enough to capture it all? I wouldn't quite say that, in terms of, you know, all the different forms of emergence you can get. By rich enough I mean, you know, enough to be a full-on general intelligence. But I do think there is room for some flexibility there, at that parameterization of Tim's model, but he would explain it better than I would. I'll make a comment and then ask a question in the chat, and then also see you, Steven. So it was mentioned that there is the embodiment of certain prior information in the continuous attractor network: if my position is this and I move forward, then that is putting me into this different position in the CAN. And Blue asked about topic modeling and semantic spaces. So that territory is different, so the maps should be 
appropriately different. And so we could say, for example, California is nested within the United States; that is not violating any geographical priors. But what about a paper that's interdisciplinary, that might be at the intersection of two different topics? Or, depending on how those topic models are generated, one could think of every paper as existing across 100 different topics to different quantitative amounts. And so it actually does bring us to this question, which Sid has asked: we're hearing a lot, in the paper and in the natural speech, of different complementarities, lower and higher. So here the lower model is the finer scale, and, I'm trying not to mix metaphors, but lower and higher is sometimes used with inner and outer, or finer and coarser coarse-graining, more specific and more general. So what is that restaurant where those are the dishes? How could something be above or below, and you get it; but it could also be more like a kernel and more like the fruit, and you get it; and it could be more like sand or more like gravel, and you would get it. What does that mean or say? We'll think, and then allow Adam to speak first. I think that's a deep and complex question, and there are a couple of levels or a couple of entry points there. But one thing is, you mentioned inner and outer as another way of describing, I suppose, higher and lower, and there's, I think, a connection between this latent state architecture and meta-learning, where you're having a more fine-grained process that's more task-specific, for adaptive fine-grained couplings, but then you nest that, or rather you create an outer loop, which is a slower aggregation process that's coarser but allows for sharing of information across episodes. So this is basically a kind of meta-learning system, giving you this divide-and-conquer approach of levels of granularity. I don't think anyone 
quite knows yet what the significance of this is, but one interesting thing about the grid cells is they seem to have a multi-scale hierarchy in their organization. There's this axis, I believe, as you go within entorhinal cortex, as you go more dorsal to ventral, or more superior to inferior, where you'll get these discrete cutoffs, potentially implying some kind of harmonic nesting of scales, maybe even coupling with multi-scale synchronous dynamics in cortex. Unclear, but there do seem to be these multiple scales reflected in grid cells in entorhinal cortex having different levels of granularity in their spatial representations. And the thing I'm curious about is, for instance: to what degree do different neuromodulator systems, some of the ways in which they could be understood as having their functions, maybe influence which granularity is most present? Like moving into, now is a situation for attending to the fine granules; or, this is a situation where we're going to really go coarse and just ignore those details. And for some neuromodulator systems, their effects on neural dynamics might actually be well explained in terms of this adaptive changing of the granularity. Very cool, and we see that in teamwork, where sometimes it's time to copy-paste and just fill out the spreadsheet, other times it's time to add a column to the spreadsheet, and other times there is this open-endedness to the task, including the local objectives and also the even broader mission and one's own relationship to the team and the project. So perhaps, if there's a similar biological substrate, then it would stand to reason that there could be multiple representations that are emitted from some kind of common generative model. Steven? 
I forgot. So one thing that I'm actually really interested in is exploring these SLAM principles in the insect case, because they have analogous and potentially largely homologous systems with the central complex. And I feel like that's an area I'm looking to look into next: to really understand these things not just in the mammal case, but to look at the insect case with the central complex and see the similarities and differences. You know, what special adaptations were there? To what extent are the functions realized by similar means or different means? In terms of, like, that neuromodulator hypothesis of influencing the granularity of spatial modeling: it's possible, though not necessarily, based on conversations with you, David, that you're getting extensive homology and common significances for things like flexible influence on the granularity of the foraging parameters. It's possible those are remarkably conserved, and you might actually see them. That being said, as you also were alluding to, the SLAM instance is different in insects: this is a multi-SLAM. In general there's a lot of multi-SLAM happening, but I think not to the same degree, and so you might expect some potentially substantial differences there, as a result of the SLAM problem having those additional weights, additional sorts of challenges, and sources of information it can leverage. I can't wait for solitary wasp SLAM and eusocial colony SLAM. Even from what I'm hearing, you could correlate these multi-SLAMs to regimes of attention in many ways, because that could dictate the nature of these granularities and where they are in the space, in the place, in the body. Because we often assume, I think, as Blue was saying, that there's inner, outer, and 
some of those are also false assumptions that roll out over everything that we assume, in the sense that inner is purer and higher is purer, but often that might depend on the multi-SLAM that's being done at the time. Yeah, one note, then Blue: coarser and finer, slower and faster. One interpretation would be that the slower timescale is the coarser graining, in the sense that it's seconds not milliseconds, and the finer is the faster. But there's also maybe an interpretation where, if you're clicking up mile by mile instead of foot by foot, that is actually moving faster and it's coarser; and if you're counting grains of sand at a time, it's finer and slower. So I think sometimes even the direction that one interprets is not fixed, but that doesn't remove the dialectical nature or the intermodal nature. It's like, is it going to be a sugar with all the OHs pointing down or all the OHs pointing up, or some blend? It has to go one way or the other, but maybe it doesn't have to be any one specific way. Blue? So I am wondering, and this is also interesting in terms of the insect case, if there's some kind of maybe generic, pre-installed, readily adaptable, coarse-grained version of a place cell, that's just kind of there as a generic representation. So, a perfect example, when you're thinking about coarse-graining a map at a location: you know where everything is in your house, you know where your bathroom is and where all the things are. But when you travel, you have an expectation even if you've never been to the hotel before: you have the expectation of what the elevator is going to sound like, how the carpet all looks the same with that funky weird pattern and the hallways twist and turn, and when you get into the hotel room there's going to be a bathroom and there's going to be a window and there's going to be a bed and a TV and a remote. So you have this 
pre-generic supposition of how the hotel is going to look. Does that live in a place cell somewhere that just morphs into "oh, the bathroom's here and the light's here", so that you're not stumbling around in the middle of the night? And similarly, when you go out hiking, you expect there to be trees and rocks and stuff outside. So do you just mash your map onto some pre-existing generic placeholder, a place-cell placeholder? It's interesting to think about. It's like a prior for the abstract space, a prior distribution over hotel rooms, which is totally encultured and totally based upon experience, and then either one is surprised or not, just as with when any other prior meets sense. Adam, what do you think of that, Steve? I mean, these are details that are definitely half beyond the edge of anything I, or maybe anyone, knows. But one way that these place fields are sometimes modeled is in terms of these chained attractors. Within the hippocampal system you can get this highly recurrent activity, especially in some of the subfields, and you can set up these densities of recurrent activity, and then you could have information moving between them as a predictive map. Not just "here are the things"; people debate how best to think of it, Gershman talks about successor representations, but there's a transition structure over which attractor, which of the places, will activate first in the sequence. So I guess the question I'm wondering is: you're encountering a new scene and you're laying down a set of landmarks, or a tiling of space. In what circumstances, and to what degree, is this information encoded in the local connectivity within the hippocampal system, and
whether you're getting this set of schemas that you're going to draw upon, where the most similar one, based on some kind of degree of similarity, will cause one attractor network to get pulled up over another, and now this is the tiling you're working within, this is the policy set you'd be pulling from, and this is your framing of the domain. Or to what degree is it relying more heavily on the moment-to-moment cortical dynamics and the mental simulation that might be happening? To what extent is it constrained by the history of plasticity within the hippocampus, determining which schemas are going to be operative, and to what degree is it more of a negotiation, more ad hoc, more determined by the moment-to-moment contingencies and this funneling through the whole cortical virtual-reality machinery, as some want to think of it? And to what degree is there not one answer: this varies both across and within individuals, and this is actually a core aspect of cybernetic functioning, leaning more heavily on the priors established through prior learning, or leaning more on the situationally specific factors. How much do you shoehorn what's going on into your existing schemas, how much do you simulate, how much do you accommodate? This is potentially a flexible parameter, and maybe some of that flexibility is baked into the logic of neuromodulators, actually flexibly influencing this. Unclear. So, very much the edge of anything. Part of the excitement for me is wondering: does this provide a functional and algorithmic significance for this otherwise just mess of biological details, where some of them gain elegant cybernetic understandings by virtue of thinking of these kinds
of questions? Can I just respond really quickly? Yes, please, Blu. So that made me think a whole bunch about: maybe we don't need a map, a place cell, for each individual hotel room, but it's like a node, like when we were discussing nodes earlier, a coarse-grained attractor space. Dump all of the hotel-room maps into this little nodule, so that when you get to a hotel room it's all the same thing: "okay, I've been here before", even if you've never been to that kind of place. And that would kind of explain things like deja vu: when you get to a place, "I've been here before, when did I come here?", or when you're driving down a street, "didn't I just drive down this street?" It just gets lumped over there into that little attractor mesh node, which is not necessarily a specific place but just a squishy ball of similar places. And also, I really love how we spoke about some of these things, Adam, in your collaboration with Colin DeYoung, Cybernetic Big Five, in livestream number 13, I think at the beginning of 2021, and now we're revisiting, same but different, the more things change the more they stay the same. This idea of modulatory features of cybernetic systems as being important for variation within and among individuals is just such an essential point, and it's so true that a lot of the details are yet to be resolved. Steven. Yeah, you could go a bit further still, because what if you don't have maps, put those to one side, but there are action policies, and I'd actually be interested in how much this is feasible, action policies that recapitulate something like a map? By an embodiment, by an active chain or initiation of some actions at different scales, it basically recapitulates some sort of nascent starting point, so you don't have to have any representations there. And when the door is engaged, the set of actions that that creates could open up, effectively, a map. So I don't know how much... I mean, I know in this model it does start
to create a network of attractors, but I'm wondering how much things could be sparked off from engaging a particular affordance, a particular meaningfulness in something. Yeah, I mean, there's actually a live debate about this. Tolman initially was talking about the hippocampal cognitive map, and part of the reason people like to talk about maps is because of the flexibility they give you: the map serves as a generative model, in that I can flexibly access it to generate trajectories. But you can do that with a well-structured graph also, and there's a spectrum of interpretations, with people thinking that you're dealing with things that are more map-like versus things that are more context-specific and graph-like, flexible and more dynamically constructed. And it's kind of a cop-out in general, I always take every debate and say "you're probably both right", but for the sake of, I guess, stability, or for the sake of potentially greater transfer of understandings and inferential power across episodes, you might want something more map-like. But if you go too map-like, you're losing the ability to deal with the nuances of specific contexts. So this again might be different perspectives on the same thing that tend to converge, with tradeoffs: you could have more context-sensitive or more situationally invariant representations, and these could reflect cognitive styles, and maybe fundamental axes of variation for cognition and sense-making. The one thing, in terms of cybernetics, that Dana pointed out earlier, about Jordan Peterson: it's actually interesting that Maps of Meaning, Peterson's book, was heavily informed by Jaak Panksepp's affective neuroscience, and by ecological models more
generally, based on things like animal foraging. So I actually think that's where Jordan Peterson got that rhetoric about maps: from an affective-neuroscience perspective grounded in this ecological perspective on foraging. Yes, very nice, and I think I agree. We may have even explored as much in one paper, Reimagining Maps, that I wrote with RJ and a colleague, Michelle. We learned about this historical distinction between archival maps and itinerary maps. Today we just think about maps as one lumpy category, but there are many differences between the map that is actually like a multi-map or mega-map, and holds a lot of information, potentially in a metric way, versus the itinerary map: head straight until you see the two big trees, then you're going to want to take a left. That type of more conversational or casual or navigable map is highly contextual, and with current map systems that's one of the challenges, because we have that mega-map, it's downloadable, but it's not going to fit on your phone. So what is the projection, contextual and situational, that's actually going to enable effective action, and that is also going to be perceived as map-like by the user? Yes, Blu. So this is like a personal-attack question. Anybody who's spent any kind of time with Daniel knows that he's constantly drawing or writing things down, all the time, taking notes, taking notes, taking notes. And I wonder: can you take your notes as a mental map of where you were at the time you were making that drawing? Does it recapitulate the conversations you were having, the ideas that were running through your head? Can you use these drawings that you're always making as a kind of conversational mental-space map? If anything, it's the opposite. I just leave the thought behind, and if we look at this later, I think: okay,
something happened by which we wrote this down. It would be awesome if there were many people with the regime of attention on taking live notes. I know that's not an approach for everybody, but then we could all look at that shared pheromone distribution and make it new again. Maybe we remember our exact previous thought, though that's impossible or unlikely, but instead we can make the relationship with the material, and with those notes, new again. Steven. Yes, maybe a little harder to categorize with language, but in a way it's the traces of your action policies, even if it was capturing the interpretation. So one thing I'm very interested in is how we realign things to action, because we've kind of been enculturated to think it's all about thoughts and feelings, right? And because active inference puts it in this other domain: well, what does that mean, and how do we talk about active inference when we're enculturated to think through other, natural ways of using English? I'm finding that a big challenge myself. So as you're laying down those traces, are you showing your action policies of how to record something, rather than your thoughts, if that makes sense? And are your thoughts always slightly hidden? Yeah, great question. Just as we were thinking about earlier with the sort of substrate-agnostic, substrate-ignoring linguistic symbolic models, like a Turing tape: it has infinite space and it can just generate infinite grammar. That's the purely symbolic approach to language, and its relationship to thought and experience is one question. But when we think about finite, actual sequences of language use and deployment as action, whether keyboard typing, or assisted typing technology, or speaking as also a motor action, it totally changes the game, because we're not just in this forest of infinite plausible grammars, but rather one realized and preferred action sequence. And then of course thoughts are unobserved but the actions are not, and there's
the whole quantum-reference-frame and shared-generative-model, rhetorical and narrative nature of cognition. So indeed, the action insight, I think, is going to be something that we'll keep returning to. Well, a final closing plenary, or whatever, and I'll also just give a little preview, which is that next week we're going to be having a guest stream with Adam. So, if it relates at all, Adam, how does it relate, what is the guest stream going to be about, and then, perhaps in your absence, what would be exciting to discuss in 42.2? You're right, there is a guest stream. A quick thought first, though. Blu, you were suggesting Daniel's notes could serve almost like a kind of memory palace, an accessible map, and then, Daniel, you're saying it's more like a stigmergic mechanism for coordination, like pointing and cueing. That's interesting. I wonder whether, when people are reading books, the ability to use the book as a map slash palace, the book being a physical object, gives you better retention than something like a Kindle. Anyway. Oh yeah, free will. So, for next week: basically, my overall hypothesis is that many of the details that Tim is using for these systems will end up being relevant as abstract interpretations of the core parameters of us as active inferential cybernetic systems, both with respect to the basics of what lets us be in the world in an embodied way, as living robots, but also in terms of their extension to high-level cognition, scaling active inference to more complex domains. So in general, I'm trying to understand, with limited success, as many details as I can of Tim's work, because I think it's one of the most promising sources of constraints we have on inference interpretations, and sources of hypotheses. So,
basically, whatever details Tim is willing to get into, I think it'll be well worthwhile to try to grok those, and that's what I'm doing. In terms of next week: some of this is relevant, in that the basic idea concerns these Libet phenomena, these studies in which people see slowly building potentials leading up to action, potentials that have predictive information, but that peak in advance of people claiming that they have decided to take the action. These studies are interpreted as evidence that people don't have free will, and we can get into all sorts of conversations about how it makes sense to think of free will in different ways, or whether it makes sense to talk about it at all. But these studies are usually interpreted as saying that your consciously experienced mental states cannot enter causal streams leading to action. And what I'm suggesting is that that's not the case, and that there are multiple ways in which your subjective states are meaningfully causal. If we think of what's leading up to something like deciding to move your hand, as in a Libet experiment, as reflecting a build-up of model evidence over your proprioceptive pose, or over you deciding to take that action, then this kind of evidence accumulation, if it's occurring via sophisticated affective inference, would be a way of understanding it. Part of what would influence how much build-up you get, or not, would be the specifics of this basically deep tree search from sophisticated inference. And we might understand sophisticated affective inference as being orchestrated by the hippocampal and entorhinal system as iterative mental simulation, where you're sampling out these different possibilities, "what would it be like if I move my arm or not?", and then evaluating that in your planning, spinning off, peeling off, these different counterfactuals, and that
if, across these, you get enough affective charge associated with going down some of these forks, eventually this results in a threshold process. The idea would be that this is relevant to some of the hippocampal SLAM work, and that you can think of this imaginative planning stage, leading up to even seemingly spontaneous actions, this imaginative phase, as being largely orchestrated by the hippocampal system, which might be best understood as a system for generalized foraging and mapping. I don't know if that makes sense, but those are some of the connections. Wow. If you still want to come for the guest stream, you're welcome to do so. No, thanks for the awesome summary, and I know that we'll be able to unpack it a lot more next week. Blu or Steven, what are some last thoughts? What was exciting or interesting to you in where we've been, and where shall we forage next? I can go. I was really interested in taking this into semantic space, and I wonder what other kinds of spaces we can maybe think about mapping out, how we construct maps of cognitive space and so forth, and I'm excited to talk to Tim about where emergence fits into the hierarchical model. Awesome, thank you, Blu. Steven? Yeah, I think the idea of keeping things clean, keeping things nascent and deflationary, even in this case here, is really something to continue looking at. And I'm really curious about what happens when we see that door, and that door says you can go down that corridor: am I going to go down into a different regime of attention, or am I going to stay where I am? So I think that ability to take these principles and say, okay, what is it that happens when I encounter something and it triggers something, and how significant is that in terms of making things happen? And I'm curious how that can marry with a lot of the mental-space work that I do, building out metaphor landscapes, or actually
letting someone explore spatially. And one last thing, actually, that came up: someone said to me the other day that he's really curious why, when he picks up his plastic pencil, it doesn't mean as much to him as when he holds a shell. So there's something about when that door has got some prior, something that you know about being in the woodland versus being somewhere which is dead. Even that itself could explain a lot about navigating those places: the ecopsychology of going into a wooded space, a living place, actually can be important for people, just in a wellness sense. What might seem very woo-woo type stuff actually could be somehow linked to what we're talking about here. So yeah, brilliant, thanks. Awesome, yeah, a very exciting and fun discussion. I really like that last piece, which is what I know I'm going to carry forward and hopefully reflect on next week: how will we build that colony? Is it a memory-palace blueprint, is it stigmergic coordination, and how? Somewhere between the mental palace cathedral and the sort of picking up pebbles with no bigger picture is where we are in our cognitive and social maps, and so that's quite an area. And Adam, the central complex modeling, I can't wait. So thank you all for joining. Thanks, Tim, also Adam, Steven, and Blu, and Sid and Dean in the chat. So thanks everybody, and see you next time.
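A side note on the successor-representation idea Adam raises in the discussion (the "predictive map" of which place will activate next, per Gershman): a minimal sketch of how such a map can be computed from a transition structure. The toy three-room corridor, the function name, and the discount factor are illustrative assumptions of mine, not anything from the paper or the conversation.

```python
# Hedged sketch, not the authors' implementation: the successor representation
# (SR) summarizes expected discounted future state occupancy under a fixed
# transition matrix T, i.e. M = sum_k gamma^k T^k = (I - gamma*T)^{-1}.
# Here we compute M by fixed-point iteration M <- I + gamma * T @ M.

def successor_representation(T, gamma=0.9, iters=500):
    """Return the SR matrix M for row-stochastic transition matrix T (lists of lists)."""
    n = len(T)
    # Start from the identity: each state "predicts" itself at lag zero.
    M = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for _ in range(iters):
        # One Bellman-style update: M <- I + gamma * T @ M
        M = [[(1.0 if i == j else 0.0)
              + gamma * sum(T[i][k] * M[k][j] for k in range(n))
              for j in range(n)] for i in range(n)]
    return M

# Illustrative toy corridor: room0 <-> room1 <-> room2, stepping to a neighbor.
T = [[0.0, 1.0, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 1.0, 0.0]]
M = successor_representation(T)
```

Under this reading, each row of `M` is a graded "predictive field" over upcoming places: nearby, frequently visited states get more mass than distant ones, which is one way a transition structure can play the role of a map without being metrically map-like.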
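And on the Libet discussion above, where a slowly building potential is read as evidence accumulation that eventually crosses a threshold for action: a toy sketch of a noisy accumulator drifting to threshold. All parameters (drift, noise, threshold) are illustrative assumptions, not values from any experiment or from the paper.

```python
# Hedged sketch, not a model from the paper: a drift-diffusion-style
# accumulator, one common reading of Libet-style readiness potentials.
# Evidence builds noisily until it crosses a commitment threshold.
import random

def accumulate_to_threshold(drift=0.05, noise=0.1, threshold=1.0,
                            seed=0, max_steps=10_000):
    """Return the accumulator trajectory up to its first threshold crossing."""
    rng = random.Random(seed)
    x, trace = 0.0, [0.0]
    for _ in range(max_steps):
        x += drift + rng.gauss(0.0, noise)  # steady evidence plus moment-to-moment noise
        trace.append(x)
        if x >= threshold:  # "decision": commitment to act
            break
    return trace

trace = accumulate_to_threshold()
```

The point of the sketch is only that the build-up predicts the action before the crossing, yet the crossing, not the build-up, is the commitment, which is roughly the distinction the discussion draws between predictive information and causal irrelevance.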