 It looks good, go for it. Hello everyone, welcome to the Active Inference Lab livestream number 29.1. Today is September 21, 2021, and we are going to be discussing this paper, Active Inferants: An Active Inference Framework for Ant Colony Behavior. This is a recorded and archived livestream, so please provide us with feedback so we can improve our work. We are a participatory online lab that is communicating, learning, and practicing applied Active Inference. You can find us at all of these social media links, and all backgrounds and perspectives are welcome here. We will be following good video etiquette for livestreams. Here at the short link, you can find all of our upcoming livestreams that are planned. And if you want to get in touch or if you want to participate, please reach out to us, and we will invite you to participate in a panel. Today, our goal is to learn and discuss this paper, Active Inferants: An Active Inference Framework for Ant Colony Behavior. We will be going through some of the aims and claims, keywords, and figures, and having a discussion around this paper. So just to start off, we will go around and introduce ourselves and start off with some formal questions. So my name is Blue Knight. I am an independent research consultant from New Mexico, and I will pass it to Marco. So hi, my name is Marco. I'm located in Vienna, Austria. I'm a postdoctoral fellow at the Konrad Lorenz Institute for Evolution and Cognition Research. And my interest comes very much from the clinical perspective: I work with DBS patients and came across Active Inference actually rather recently. Cool, Daniel? Thanks. I'm Daniel. I'm in California. And as first author, I'm happy to maybe just introduce the big picture. Do you want to just briefly say hello, then I'll return? Stephen, are you OK? Oh, yes, Stephen. Yeah, hi there. So did you just introduce me? Hi, it's Stephen here. Nice to be on. And I am based in Toronto. 
And I work with communities and work with spatial sense-making around different issues, and how Active Inference can now help to understand those processes of sense-making and dynamic inference in the modern world. Awesome. So, Daniel, do you want to maybe introduce the paper, or give us a summary of what was your big takeaway message from this paper, or what was your motivation behind the work? Well, just hearing some of the backgrounds of those here, ranging from human, medical, and clinical to sort of social and collective processes playing out in humans, it just shows that we each approach Active Inference from our own experience. So personally, my background was in genetics and insect behavioral ecology. And so it was conversations and reading the papers of especially Axel and, of course, Karl Friston. And we were having conversations about, well, where does Active Inference fit into discussions about collective intelligence? Where does Active Inference fit into bigger discussions about evolutionary theory? Like, if Active Inference can be an integrative framework for different social systems, for different human systems, why not other animals, too? And so there we were at the intersection of curiosity about big questions in evolutionary theory, which I know we'll return to today, and specific questions about the areas that I was researching at the time, which was the regulation of foraging and decision-making in ants. And so there were conversations with Axel and Maxwell to a large extent where we started thinking, how could we modify the types of models that we currently have in Active Inference and bring that into something that adequately, at least, starts to work towards capturing how the ants do it? And Alec had worked on the code of many Active Inference projects. And so that was quite a fun process. Maybe we can also talk about interfacing between ants and computer science. 
And we mainly wanted to navigate a little bit towards the bigger questions in biology, but do so by really tackling a specific scenario that would then lend itself to unique explanations and predictions and models. And the specific contribution that differentiates this Active Inference model from other Active Inference models is first that it's in insects and ants, but also importantly that it's stigmergic. So it's modeling explicitly the feedback between the environment and the agents, whereas there had been sort of multi-agent Active Inference simulations where the agents communicated with each other or they had a shared theory of mind. Those types of models existed, but this was the first time that one of the action affordances of the agents included the capacity to modify the environment, and the status of the environment modified the likelihood of the agent taking different kinds of actions. Cool. So I just want to reach out to Marco and Stephen and just ask: what's something that you were excited to talk about today, or something that you liked or remembered about the paper? Either of you have thoughts? Should I start? So one thing, as I already mentioned before, is the role of dopamine, which I find super interesting. And you mentioned in the paper that the foragers have higher levels of dopamine. So I was wondering about the role of that, and how that influences affordances, and what the individual and also the level above, like the colony, does with that. So is there a specific role for dopamine? For the colony as a whole? Great question. And we wrote it down on slide 14. So we'll definitely return to talking about some of the molecular roles of dopamine. Cool. And Stephen, do you have any thoughts? 
Yeah, I suppose mine is a bit more of a broad question, but I'd be really curious about that balance between sort of specific things relating to ants, and how that relates to things like dopamine, and maybe the broader process of the design thinking and working through this kind of active inference framework, just how that was useful to think about how things may relate to their environment. So I suppose the process of creating the model, as well as what the model was and what it found out about things in the world. Thanks. Awesome. Daniel? It's a great point, almost asking about how we study the biomimicry not just of materials, like light materials or strong materials, but of collective algorithms. And we can't really study the success or the attributes of a collective behavioral algorithm outside of its ecological niche. And so there's a very long and important branch of research by my PhD advisor, Professor Deborah Gordon, which is where I was when we were doing a lot of this initial work. She has studied for 30-plus years how the statistical regularities of the niche are associated with which collective behavioral algorithms of ants we see happening. And so there are like 15,000 ant species; they're on every piece of land except for basically Antarctica, which is kind of where they should be if you think about it. But if you want to have a framework for ant behavior that includes the desert and the jungle and all these types of different environments, there's sort of baked in this idea of appropriate generalization, because we definitely can't have a theory for each ant species. So we need to look a little bit bigger than that, yet we don't want to have just an ant-specific theory only. And let's think about termites and bees and non-insects. So what is the appropriate level of generalization, and how are we going to generalize the study of collective behavioral algorithms and how they interface with different kinds of environments? 
While also remembering that we've got to have our eyes on the ground: we need to be explaining those ants. We can't just simply spiral off into abstractions about collective behavior. So the ants are a cool system because they remind us about ecological and behavioral diversity and the need to generalize, but also there is a real system that we can actually see. Awesome. Thanks. Stephen? Yeah, thanks, Daniel. I was wondering how much that generalizability that is in the field that you're in with your supervisor might be something that really brings something to the table for active inference. And I'll be curious as well how much those algorithms that had been worked on for a number of years either got modified or ported or had to be kind of reconceptualized from the ground up. That'd be interesting. Cool, Daniel, do you have a response? Yep. So one reason why active inference is able to swap in to where other theories or frameworks have been used before is because at the core, it's an input-output structure, just like a lot of other behavioral studies. Like, sensory information comes in, something happens, and then action comes out. So in that way, it's able to map onto a lot of other ways that people have thought about animal behavior or ethology. One of the differences in active inference is that we're not committed to any specific cognitive model. So like in this paper, we just use pretty much the most simple possible cognitive model, if you could even call it that. It was one-step decision-making. There was no memory. There was no anticipation. There was no capacity to use some of the sensory affordances that we know that ants actually have and use in nature: multiple senses, interactions with nestmates, interactions with other physical objects. So we mapped onto this input-output structure, but we know that we can generalize it later. So that's one reason why active inference will hopefully play a nice generalizing role there. 
I hope, yeah, go ahead, Stephen. Yeah, that's really cool. And I suppose one thing, when you mentioned that: it even ties in with that dopamine question, but I'm not sure because I'm not as familiar with that. But by not having it all based on the cognitive, it's showing what the niche can do for you, in a way. It's like, we all assume it's all going on in the head, which is the old way of thinking, but with active inference, it's like, well, maybe there's other ways that we can get where we need to get, so to speak. Awesome, Daniel? And with the ants, it was almost never a question of it happening inside the head only. It's not to demean their cognition; it's just very clear that no single forager would know: how many seeds does the colony have? How many does it need through time? What's the weather and the trade-offs of foraging? Where are the seeds? I mean, those are questions that at a first pass we know are just unknowable. And so it's pretty clear that the way that individual nestmates are making decisions has to have a radically different mode than, for example, let's wait till we have all the information and then make the well-reasoned decision based on our simulation. It's just not happening. And then once you see that lens on ants, you go, huh, well, maybe there's some other swarming settings where the agents don't have total information. And even if they did, it wouldn't be obvious what to do. Like, we can all imagine a chessboard: we have total information about where the pieces are, but even experts in chess don't necessarily know the best move, and there might not even be one perfect best move. So it really moves us into a setting of thinking about how action is undertaken, including action that can be epistemic, like gaining knowledge amidst uncertainty, rather than waiting till we've resolved our uncertainty, because then what? We still wouldn't know what to do. 
Cool, that leads in pretty nicely to one of the questions that I had brought up last week. So there are these colony-level metrics, right? Like how many seeds are in the nest, or how many nurses are there relative to foragers relative to, you know, baby ants, I guess they're infant ants, infants. So how are these colony-level metrics retained in a colony? Good question. Well, you asked a why question, in a sense. So it makes me think about the Tinbergen four whys, which we've spoken about on other livestreams. But the evolutionary why is just: colonies that have certain colony-level attributes are more likely to succeed. And so we end up seeing the emergence of certain ratios of male to female nestmates, or certain ratios of forager to nurse performance, just by virtue of their survival. So that's sort of an evolutionary circularity, just like in active inference, where we talk about how the systems are resisting dissipation. So for systems that are doing it, it sort of explains itself: they're here, and we observe them because they're resisting dissipation. And it's really this question about colony-level phenotypes that fascinates people who study collective behavior, because there's the traits that a nestmate has, which are not so different from the traits that any other insect has: six legs, what's the width of the head, what's the gene expression of the gland or the midgut, like what digestive enzymes are they making? Those are things you can measure about one nestmate. And so the body plan in ants is really quite similar to all insects, but then there's this really qualitatively different level of organization with a colony. And how do those phenotypes arise? How are those colony-level phenotypes in feedback with nestmates? How does selection shape those phenotypes? Nice, thanks for your answer. And there's a couple, so maybe let's go to the slide that shows where you're measuring the colony-level metrics, or the coherence, right? 
Like the colony coherence, or I forget the term that was used. But you have the model, or maybe we should start with the model really, I guess. But maybe do you want to just give us a quick overview of the model and the point of it and... Yep, let's go to slide 15. And so just to bridge from the colony phenotypes back to the specifics of the model, it's sort of like if there was a music festival: there's all kinds of phenotypes of the individual participants, like their self-report, where did they move, how is their decision-making being influenced by their sensory stimuli. But then there would be certain descriptions that you'd only have one of for the whole music festival, like, what is the average distance between people? So that's what we were studying with the swarm coherence metric, but it just points out that there are going to be two totally different types of phenotypes. Some can be measured multiple times within a colony, per nestmate, and other times there are going to be phenotypes that only arise at the level of the colony, like the sex ratio or the average distance. And what our model did was we specified the process by which the nestmates basically perceived and acted. So it was an action and perception model at the nestmate level. We didn't explicitly have an input or an output for the colony, not that it couldn't be done; you could also think about where that might come into play, maybe in dot two. But we looked at a setting that has been used to study colony decision-making, and then we specified a nestmate-level model and put it in this context where people had already been using it to study the dynamics of how colonies make decisions. So here it's called the T-maze test, and there are two arms to the T-maze, the two red arrows on the left side here of Figure 1A. Food is on one side of the T-maze, and it can either be static or it can be switching. And this is something that they use in lab rat experiments as well. 
So the colony's task is to basically get food. But what we did is we didn't actually encode that as a colony-level goal. We really went down to the nestmate level, and we specified, just like any other active inference model: what are the sensory observations made by nestmates, which in this case was just the pheromone density of the squares around them; what are the action affordances of each nestmate, which is, it can move to any of the adjacent squares; and then what's the generative model, which entails preferences and a free energy minimization happening inside the nestmate at each time point, so that given the observations at a certain time, the actions are selected. And in this case, the preference was for more of the pheromone. So the pheromone can be interpreted as an attractant, something that the ant pursues or prefers to observe. And the only other tweak, which is what introduced the pheromone in the first place and the whole concept of stigmergy, was that on the way in, after they've touched the seed, but not before, they also deposit pheromone. So that was an additional action affordance, to deploy pheromone. And there wasn't any decision-making about that. It was just: if you've already touched the seed, then you lay down the pheromone, and if not, you don't. And so it's sort of like those simple rules: pursue regions of higher pheromone density; lay down pheromone if you've touched a seed, but not if you haven't. Those two rules at the nestmate level recapitulate some of the phenotypic dynamics of colony decision-making. So that's why it's fun to think about: maybe we can look at complex collective phenomena and then understand how they emerge from simple interactions among agents, and between agents and their environment. Stephen? Just a quick question is, I'd be interested because you've got, say, one ant that goes off foraging type of thing. And then maybe you've got the second one that follows. 
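The two nestmate-level rules just described (pursue higher pheromone density; deposit pheromone only after touching a seed) can be sketched in a few lines. This is a minimal illustration, not the paper's released code: the function name, the grid representation, and the softmax over neighboring pheromone standing in for free-energy-minimizing action selection are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def nestmate_step(pos, grid, carrying, deposit=1.0, temperature=0.5):
    """One perception-action cycle for a single nestmate (illustrative sketch).

    pos: (row, col) of the ant; grid: 2-D pheromone field (modified in place);
    carrying: whether the ant has touched a seed. Moves are sampled with
    probability increasing in neighboring pheromone density.
    """
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    r, c = pos
    # Observation: pheromone density of the four adjacent squares (toroidal edges)
    neighbors = [((r + dr) % grid.shape[0], (c + dc) % grid.shape[1]) for dr, dc in moves]
    density = np.array([grid[n] for n in neighbors])
    # Preference for higher pheromone, via a softmax over observed densities
    p = np.exp(density / temperature)
    p /= p.sum()
    new_pos = neighbors[rng.choice(len(moves), p=p)]
    if carrying:  # stigmergic rule: lay pheromone only after touching a seed
        grid[new_pos] += deposit
    return new_pos
```

With an all-zero pheromone field, the move probabilities are uniform, which is exactly the random-walk phase the panel discusses for the start of the simulation.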
And then I suppose there's what goes back to the nest, if I'm right. Yeah, I was just wondering, like, how much more complex, or does it help to have more in the modeling, or does it make it more challenging to start using too many ants? So we templated the natural history, if you will, of this off of a Formica ant species, like a wood ant. So it's referenced in the paper. This dynamic of like, go out without leaving pheromone, come back with pheromone. So given that natural history, to change the number of ants from 2,000 to 2 million is just a number in the config file when you run the code. So it actually is not challenging to scale the colony's size. It might burden your computer or cause some other limitations, and it certainly will change the collective outcomes, maybe in a linear or maybe in a non-linear way, like we explored in Figure 3. But there's so many natural histories and ways that ants forage. Like there's tandem running, or an ant remembers where the food is and then it does like a leapfrog: it takes an uninitiated nestmate and leads it, and then when those two return, they bring back four, and then there's eight. So there's all kinds of recruitment and food localization strategies in ants, which is why it's important that there's not just one single way that ants forage, because they're looking for things that have different regularities in the environment. So within a natural history, the script is written in a way where it's pretty easy to change the map shape or to change the number of ants or their appearance dynamics. And then a little bit of the challenge would be to adapt it or transpose it to a different natural history, or to add multiple interacting pheromones or other kinds of things, like interactions amongst the ants. Marco, you're muted, yeah. 
Sorry, I just wanted to ask about the model as it is described in the paper, because at the beginning, the ants have basically random movements, because all the surrounding movement opportunities have the same probability. So is it sort of like a Brownian motion until the first ant discovers the food source and then starts altering the environment by the pheromone placement? That's exactly right. And there are seven supplemental videos, GIFs, where it's shown what it kind of looks like on this T-maze. You see these little particles that start welling up and diffusing from the bottom. So that's a total Brownian walk, because there's no preference for direction when there's no pheromone deposited. And then you see one ant find the seed, and then it starts laying down this pheromone, but it's also diffusing randomly, because at least initially, there's also no preference for direction. But then you can imagine that there starts being a higher concentration of pheromone on the side with food than not. And eventually that gets reinforced, because it starts to attract anyone who sort of diffuses into this regime of locally increased pheromone density; they start to get pulled to the places where there's increased density. And then that ends up reinforcing, and as soon as there's like a ridge of high pheromone connecting the food back to the nest entrance, that's the formation of a pheromone trail. So no ant needed a bird's-eye view or a goal to even form a trail. It's a purely agent-level, process-based explanation by which you get the emergence of a phenotype that goes beyond what one nestmate knows or even prefers. And that's the existence of a trail. Now, you can imagine that colonies that have the appropriate parameters, like the right amount of explore versus exploit, how much to follow the pheromone, how rapidly the pheromone should decay, those parameters are actually shaped by the evolutionary process. 
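The reinforcement dynamic being described here depends on the pheromone field both diffusing and decaying on every tick. A minimal sketch of such a field update, assuming a toroidal grid and made-up parameter values (the paper's actual implementation may differ):

```python
import numpy as np

def update_pheromone(grid, decay=0.05, diffusion=0.1):
    """Decay and diffuse a 2-D pheromone field (illustrative; parameters assumed).

    Diffusion mixes each cell with the average of its four neighbors;
    exponential decay then removes a fixed fraction of all pheromone,
    which is what lets the colony 'forget' a stale trail.
    """
    # Four-neighbor average via rolled copies of the grid (toroidal boundary)
    neighbor_avg = (np.roll(grid, 1, axis=0) + np.roll(grid, -1, axis=0)
                    + np.roll(grid, 1, axis=1) + np.roll(grid, -1, axis=1)) / 4.0
    grid = (1 - diffusion) * grid + diffusion * neighbor_avg
    return (1 - decay) * grid
```

Because the rolled copies just move pheromone between cells, the diffusion step conserves the total amount; only the decay term removes pheromone from the system, counteracting the self-reinforcement of the trail.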
Colonies with maladaptive decision-making of nestmates are not gonna bring home the seeds; they're not gonna reproduce. Whereas for any given niche, if the regularities can be captured by the decision-making algorithm that the ants use, you'll see those parameters, plus slight variations on them, in the future. Awesome, thank you. And that kind of brings us to Figure 2, which I threw up here. And I clipped this quote from the paper. So it's talking about the rate of pheromone decay. And it says: informationally, this decay rate can be thought of as an environmental perturbation or drift that counteracts the reinforcing effect of stigmergic convergence. And here, even in the figure legend, it says: we chose a representative simulation and did not tune the parameters to discover optimal decision-making. And so I was just wondering if you played with the decay rate at all, because I think about, if the pheromone didn't decay, there would be a really difficult time switching, like from one arm of the maze to the other, when the food source switches. So you have to forget at some point where the food was, and this kind of brings me to one of the big questions: how important is this decay rate in the memory of ants? And so this is clearly an example of niche modification, where they lay down the pheromone and it externalizes their memory of where the food is for other ants in the colony. But it has to decay, otherwise they're just confused. Like if we were flooded all of the time with all of our memories, unable to distinguish our memory from the present time, which happens in some forms of neurological disease; some neuro-atypical people have this inability to distinguish the present from the past from the future. So is that like a critical thing, or do you have any comments on that maybe? Or maybe Stephen wants to comment on what I said and then you comment. Or Stephen, do you wanna go first, or what do you think? 
Okay, yeah, I'll just add one thing to that. Yeah, I think that's a good point: I suppose, how much does the ability to allow the environment to re, I suppose, re-remember, or not re-remember, but reformulate? So you may not need to forget as much as just change how much you're weighting what you give priority to. So in a way, it's like by knowing something that you're more certain about, it might get something similar. So I'll be interested in that idea of offloading onto the environment and how that maybe negates some of that anyway. Cool, Daniel? Yes, so it is akin to forgetting, but what's really interesting is it's the colony forgetting, because the ant doesn't have a direct memory, as we encoded it. So it's a lot like cumulative culture. There's a totally qualitative difference between what an individual remembers on board and then what the culture remembers. And so we do modification on our cultural niche. (Greetings, Scott.) And so that enables new dynamics of learning and forgetting, which are distinct, which have different mechanisms than the ones that happen at the nest slash individual person level. And so it is a forgetting that occurs. And otherwise you would get stigmergic reinforcement, stigmergic tightening, and it would end up just being overfit to the way things were. And then you wouldn't be able to access new resources or have the flexibility to find the seed after it moves to the other branch. Awesome. Sorry, did you have something else to add? How goes it, Scott? Hi, Scott. So here, we'll switch on to... Oh, Stephen, go ahead, Stephen. Yeah, just one thing, Daniel. I was wondering how much you think... I mean, I know it's an ant model, but you know you're talking about extrapolating beyond that. Like you just mentioned that with culture. So I wonder if there are some principles starting to emerge that you're kind of curious about exploring further, because of that ability to think differently about how we know. 
The precursor model to this one was actually on slide 53: the acquisition of culturally patterned attention styles under active inference. So this was a visual scanning, visual foraging model where the eyes scanned over pottery designs, and there were cultural preferences for certain local motifs. And so it's kind of interesting that we almost stripped the cultural interpretation away to apply it to ants. And I do think that it could be reintegrated in. It just helps us clarify that we can know what the individual-level mechanisms are, like a personal preference, what is being sensed by the person, and what are their action affordances, kind of like all the variables in the active inference model: perception, cognition, and action. And then the external states can be other agents. And we've talked about communicating agents before, but the external states can also be something that's cumulatively modified. And so when your sensory capacities include the ability to read websites, and then your action affordances include the ability to modify websites, it would take transposing the model into those different inputs, generative models, and outputs. But it does suggest that we don't have to make everything only exist as a function of interactions amongst active inference agents. In fact, cumulative culture helps us distinguish pieces that are clearly cognitive, clearly on board, from outcomes that are clearly cumulative and have memory that isn't just what the agent remembers. Almost gives a rationale for fashion, in a way that could be interesting as well. When you think about design and pottery, there's a fashion of other things as well. That makes me think about who controls where the seeds are, which arm. So here it's the spatial movement of the seed from one side to the other and the switching dynamics, but fashion, I mean, I guess what people wear, or just like, oh, it's a fashionable book. 
It's sort of a way in which people's niche modifications include locating and moving or hiding seeds. And it's like, oh, you didn't know about this band? Like, I foraged better because I know this band that you don't know. That's sort of like hipster informational foraging. And so how do people signal through fashions? Keep up with the times. Cubism is the art that we all look at. This is what we think is cool, or this seed location is no longer in vogue. So maybe some types of foraging metaphors could help us understand cultural phenomena. That also feels like... Go ahead, Scott. That also feels like an Erving Goffman kind of reference, of projecting out as you want to be perceived, that recursivity, so that if you want to be perceived as a mountain climber, you wear a shirt from a mountain climbing store. And so you're expressing a thing, and you want to be perceived as that. So there's that kind of mirror effect that goes on, to define ourselves by how others see us. It also reminds me of the earlier notion of cumulative embodiment. Reminds me of the David Bohm work, that all reality is first a thought. And so the realities that are accumulated are just accumulated thoughts that then become artifacts of that interaction. And then their embodiment is left in that intergenerational communication: through language and through material culture is where the mind resides, really in the interaction with those things. So we're able to communicate in deep time that way. It reminds me of that quote, that Bach and Mozart didn't die: they turned into their music. So we still communicate with them. We still hear them and what they were communicating out. The realities that they were trying to create became embodied, and they're still available to us through deep time and over space as well. And so maybe part of the cultural affordances is the leveraging of that communication, the leveraging of self and identity. 
And you start to have people able to pick up on those and leverage them themselves, and then they take it in a slightly different direction, and then you have that innovation that happens. And one of the things that I find fascinating is that fashion and food are two places where you do not have intellectual property protection or copyright, but they're the most innovative (or, if you're from England, innovative) areas, which is fascinating, right? Because the rationale for intellectual property laws, the excuse for them, is that they help to promote innovation. But in fact, the most innovative things aren't afforded that protection. It's more a foraging kind of enthusiasm that makes those things. You end up with the intergenerational maintenance, not just construction and enshrinement of a pyramid, but the actual ongoing maintenance of cumulative culture. Whereas it's not enough to just do it once and then leave it behind. And so it is interesting how the process, and carrying the process forward, gives you the static objects of cumulative culture; but the cumulative culture and the static objects themselves, they don't map one-to-one onto a process, let alone an adaptive one or a successful one. And so how do we... and that relates to the way that reward was modeled in this simulation, and also to dopamine. So I know we'll get there. Awesome, Stephen and then Scott. This reminds me as well, there was a marketing manager that I once spoke to, and he was telling me that they pay people to go into bars and to drink certain drinks in trendy clubs in the big cities, say in London or wherever, so that they would be seen to be drinking that drink. And then it starts to spark off. 
Of course it's completely covert, but with this sort of thing you can see how that distorts people's stigmergic trail, because they see someone who's presumably very good-looking or fashionable or very high status or whatever at the bar, sitting on the stool, drinking the drink in a conspicuous way, and they take the cues at some level. So anyway, I thought that's interesting. For sure. Okay, so I think we can maybe move into the... Wait, Scott. Oh, sorry. Oh, sorry, Scott. I was looking for the, can you hear me? Yeah. Okay, sorry. I was looking for the controls. I totally lost my screen; it sort of screwed up. In the U.S., Stephen, there's a thing, I was looking for the word for it, but there's actually, when you go into a bar and say, I'd like an Absolut vodka, please: if you've been hired by Absolut, you're required to say, I've been hired by Absolut vodka; I'd like an Absolut vodka, please. They actually have a disclosure requirement for that exact behavior, where people go in bars, and if you're being paid, you need to make a disclosure as a technical matter under federal law. So I thought that was pretty interesting. There was another point I was gonna make, but I spaced on it because I liked your point so much. Sorry. It's like an active disclosure. Here's my preferences. Here's my expectations. Here's some components of my generative model. Right. Awesome. Marco? Yeah, I was just wondering, since you were talking so much about the distributed cognition now and making sort of extrapolations there, there clearly must be some limitations there, right? Because I assume or guess that human cultural societies have a larger capacity to change phenotype compared to, let's say, the limited variables that an ant colony might have, and that might be subject to evolutionary pressures. And let's say, is there a larger flexibility in phenotype for human culture, would you say? We almost have affordances that include changing the hyperparameters, the higher-order parameters, on our niche. 
And clearly the scope of our niche modification has expanded as well. So it'd be like if, in this ant colony, we added another level: it can decide where to move the seed to, or it can decide to modify the structure of the maze. I mean, we know that ants build, they build their nest architecture in many cases. So what happens when you enable a new kind of affordance? It reveals like a whole new game theoretic or signaling theoretic dynamic that can flip how it would have been otherwise. Like in a fixed maze, maybe there's a strategy that works, but if you add this ability to move blocks, maybe the strategy is actually to make a hypotenuse, just going straight to the food. So that totally changes which parameters of learning and forgetting you might wanna have. And so it's a very interesting question about how affordances come into play. How do they interact? And then what is happening with humans with our new affordances? Our ability to have long range synchronous conversations and to make irreversible ecological decisions as well. Those affordances for niche modification and communication are changing what it means to be human. Definitely agree with that, as we kind of mentioned in the 29.0 contextualization video. Any more thoughts on that, or we can maybe move on to these colony level metrics in figure three? Figure three, just to describe the two panels. On the left is just a tally of basically the seeds returned home, just the count of the round trips. And then that is being shown through time steps on the x-axis for four different numbers of foragers, like 10, 30, 50 and 70. And so again, we weren't like tuning parameters or making statistical claims, but it's interesting to see that the 30, 50 and 70 runs, the ones that kind of look like step functions, have the same accumulation dynamics as the others do. And then on the right side is that swarm coherence, the distance metric amongst the nest mates.
And so again, maybe this could be run out longer, or it would be different on another map shape and all these other things, but just to see that the average distance sort of increases and then stabilizes at a somewhat characteristic rate. So these are phenotypes that nest mates don't have, but they're composed of the phenotype, the position, the behavior, et cetera, of the nest mates. And so again, it just points to the need to be clear: when are we talking about individual level phenotypes that might be in feedback with environments or with collective outcomes? And when are we actually talking about specifically collective outcomes? Nice. So I have a question that relates to this directly. So in this kind of colony cohesion, or I think you call it swarm coherence, there's a distance metric that is measured, but in real ant colonies, are there other metrics that are used to judge this kind of cohesion? Because I wonder about the time that's spent communicating, like ant to ant communication time. Does this also enhance colony performance, or what other kinds of metrics can you use to determine this kind of cohesion? We talked about this a lot among the authors, and we thought, okay, well, there's the Euclidean distance, the average distance or the distribution of distances amongst nest mates. One could also look at the correlation of their trajectories. So a perfect correlation of the trajectories would be like every ant is taking the same trip, whereas no correlation of trajectory would mean every ant was going in a different direction or having different movement dynamics. And so that could be computed over the total trajectory of each ant or even at each time step. Like on the next time step, zero correlation means that there's 50 going up, 50 going left, 50 going down, 50 to the right, or maybe there's this sort of shift, a coordinated shift within a time step as well as among time steps.
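To make those two coherence measures concrete, here is a minimal sketch. This is not code from the paper; the array shapes and function names are our own illustrative assumptions.

```python
import numpy as np

def mean_pairwise_distance(positions):
    """Average Euclidean distance among all pairs of nest mates
    at one time step. positions: (n_ants, 2) array."""
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    n = len(positions)
    # the diagonal (self-distances) is zero, so dividing by n*(n-1)
    # averages over the n*(n-1) ordered pairs of distinct ants
    return dists.sum() / (n * (n - 1))

def step_heading_correlation(trajectories, t):
    """Mean pairwise cosine similarity of displacement vectors at
    step t (assumes each ant actually moved at that step).
    trajectories: (n_ants, n_steps, 2). 1.0 means every ant moved
    in the same direction; near 0 means uncoordinated headings."""
    v = trajectories[:, t + 1] - trajectories[:, t]
    norms = np.linalg.norm(v, axis=1, keepdims=True)
    u = v / np.where(norms == 0, 1.0, norms)  # unit headings
    sims = u @ u.T                            # cosine similarity matrix
    n = len(u)
    return (sims.sum() - n) / (n * (n - 1))   # drop self-similarities
```

Swarm coherence over a run is then just `mean_pairwise_distance` evaluated at each time step; mutual information between trajectories would need a discretized estimator on top of something like this.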
So looking at the correlation of trajectories, the mutual information, these kinds of things could be calculated for the colony. And then we thought, you know, it might not even be easy to rank these, because what if the mutual information is low because they're doing different types of foraging, and that is constituting a more coherent swarm? So just because their correlation is high, that's sort of the first pass: oh, well, it must be a coherent swarm because everyone's taking the same trajectory. But then there's this other type of coherence that might even be more emergent when the correlation or the mutual information of trajectories is low, and it's actually by virtue of that that the colony is able to achieve things that the nest mates don't. Awesome, thank you. Oh, Stephen? Yeah, could I just ask, when you mentioned that about the different types of movements and correlation, do you think that could point towards how there could be different types of counterfactuals brought to the table? So maybe the information coming back, as well as being a seed, it could be a counterfactual. I suppose the wall, it's not a seed, but it's a wall, or I don't know, maybe that doesn't, I haven't thought it through, but I'm just wondering how counterfactuals might play out. And counterfactuals is an interesting question, because it's a mental ascription which we think we can do, but then do ants do counterfactuals, or how do they at least act as if? And it's a great question. Just one thought would be that the resilience of their algorithm is such that they act as if they're taking counterfactuals into account. So for example, their trail is functional when it's functional, but then there's always this adjacent possible that the trail could be disrupted. And when the trail is disrupted, they find a way to go around or to repair the trail.
And so it's like, by having this counterfactual, the adjacent possible that they might not have a functional pheromone density, there's this self-repairing and regenerating aspect to their behavior in nature that makes it look as if they are accommodating these counterfactuals. And it reminds me of one of the greatest disinfo scenes of all time, in A Bug's Life, I believe, or maybe Antz, where a leaf drifts down and then all the ants are carrying the seed, of course in their hands, not in their mandibles, which is not how ants carry seeds. And then the overseer comes over and blows a whistle, everybody stop, you know, and the first ant that was returning home gets anxious. I know it's just a movie, so chill out, et cetera, but you couldn't stack more misconceptions into that representation of how resilience actually arises in foraging. An overseer doesn't come; the first ant engages with the barrier and then goes around it or over it. And that is what ends up allowing the rest of the ants, who don't have communication with any overseer or even that first ant, to participate in that refunctionalization of that part of the trail. So it would be cool to see representations that don't disinform as to how ants actually work. That's a really cool, that's a really cool response actually. I also suppose you could flip that as well and say, well, maybe the adjacent possibles of these paths that the ants are taking in the environment, maybe that's how we get our counterfactuals in the brain. It could be like a foraging path, you know? I'm not saying that's how it is, but maybe at times there's something that mirrors that, but in our own cognition. Scott? You know, that reminds me of our misunderstanding of how ants work. It reminds me of the assertion I read that dogs in the wild don't have the alpha dog kind of characteristic and the pack mentality.
I don't know the research on this, but that they learn that from humans. And so it's interesting we kind of project onto animals human ideas about stacking or whatever social status. And it's fascinating to think whether art, because we were talking about fashion before, might the arts be a vehicle for inviting humans to start to understand things that ants know? Right, you know, what can we find from the art world, a movie or something like that, which did a different treatment of it and started children embracing that, right? Because these children are learning from these movies. It's a fascinating notion to think about how a more, I use the word accurate portrayal, but a more maybe sensitive portrayal of the dynamics of other biological beings might help us as biological beings to find our way. Because certainly the abstractions that we're teaching children now are not fully functional. So there may be some additional learnings there. A conversation I was on just before this, we were talking about capacity building in precarious populations. And I was pointing out a similar kind of thing, where we're saying, well, we might have things to learn from precarious populations. It might not just be parachuting in colonial type solutions, but rather, what can we learn from new discoveries? And this was in the context of the idea of bringing technology, or information, to precarious populations, but also the reverse of it, bringing community knowledge to technologies, the reverse flow. And again, similarly here, maybe the community knowledge of ants needs to be brought to the technology architectures that we're talking about, rather than us constantly being blinded to the architectural elements that we might have because we have these centralized elements, these alpha dog notions, these things that we project out. So we're unable to learn because we're just imposing a structure on something that may not be there. So we can't learn. Nice, Daniel?
The history of psychological projections onto the ants and the bees is tremendous. It relates to sex and gender and politics. I mean, she's called the queen and she's called a she, but she's just a diploid insect. So how do we go from the words that we use to the kinds of models that we use to describe how the colony functions? Is it an economic maximizer? Is that the goal? Is GDP maximization? Or is getting seeds the goal? And that was one of Professor Gordon's insights as well: it's not just about maximizing the per capita return on foraging in terms of seeds, which optimal foraging theory often suggested, because in the desert, it's costly to go out foraging. You are spending water to get seeds which give you energy and water, but there's a time when you don't actually need to get more seeds. So the way that the projections occur, and that they are mapped sometimes implicitly to our human systems and ideologies, is very interesting. And I think that relates to this more general point in active inference, which is that sensory observations are ambiguous. That's the insight of Helmholtz and unconscious inference, which is that the perception of ants, even if the edge detection is relatively unambiguous, what it is that the ants are doing, whether they're taking care of every nest mate and it's like utopia, or they're sloughing off the useless ones and it's like hellish, that's our projection. And so that's why it's so interesting to have a model where we can be clear about what the sensory observations are, of ants, of ourselves, and then how our generative model is actually what is generating perception. You know, we see with our mind, not with our eyes. So the ant is also perceiving with its mind, not just its antennae. Awesome. Stephen? That's a really very, very good point.
And also I think with Helmholtz, bringing that back in is really useful in terms of how we project, because he was a way for us to get back to what we already knew. It's like, with all our intelligence, as psychology came in, there was this knowing and this perspective that knows, and that became this almost unspoken axiom that we didn't even know was unspoken. Everything from there on in was about what do you think about this and what do you think about that and what do you think about the other? And a hundred years later, we're in all sorts of problems, right? Well, we were very quickly in all sorts of problems, actually, at the beginning. So anyway, going back before then, about 300 years ago, they'd probably look at us with a wry smile and they'd be like, yeah, right? Because they hadn't had a whole load of other stuff that involves this knowledge of knowing as being about thinking and us, and it was much more in the kind of unknown states of the world. I mean, their dictionaries weren't even there, right? So they didn't have that defined language in the same way. So yeah, I think that's a nice point. Well, language becomes, oh, sorry. Scott and then Daniel. Language becomes that artifact. Again, that's why I keep, yeah, again, it's not accurate what I say when I say the mind resides in language and material culture, but I just keep saying it to get people off thinking that it resides in the brain. And so it may just be that those traces we leave of prior interactions that seem functional become so ascendant. You're born feral, and then you kind of look and observe what works, and it's so localized. Knowledge is so localized, and now the access to broad knowledge is so great that there's nothing that suggests that the human lifespan was designed to be able to absorb all the knowledge that's now available, right? We're not necessarily able to do it, but we try. I certainly try.
I mean, I'm surrounded, like, look at all of us that have books around us, you know, we're surrounded by this and we're trying to figure all this stuff out, but there's nothing that says we should be able to, and certainly not as individuals, right? So then I start thinking, well, okay, I'll take solace in the fact that we, this is the synthetic intelligence thing I keep yapping about instead of artificial intelligence. We know. So that's good. So that's pretty nice. So that means that what I've got to do is just cultivate the we, not the I. So that jump is what it feels like, I'm wondering. It just feels like humans could be better served by at least being sensitive to this. We're not going to become ants, right? But we would be better served by a sensitivity to this, and it feels like there's a vast amount of human de-suffering that could be made available if this could be made available. And I'm not sure if it's through the arts or through economics or what the mechanisms of influence will be to allow the enjoyment of the additional solution space or additional existential space that these observations offer. It doesn't necessarily change the way we act every day, but it definitely could adjust the existential aspects of the human condition. And that seems like it would indirectly serve to reduce anxiety and maybe hatred as a result. Anyway, sorry for the ramble. Daniel? Stephen, you brought up 300 years ago. And so it wasn't exactly 300 years ago, but within about five to ten years of 300 years ago was a book called The Fable of the Bees: Private Vices, Public Benefits. And I think it's worth reading one of the summaries of what this poem and book is about. The thesis is that contemporary society is an aggregation of self-interested individuals necessarily bound to one another, neither by their shared civic commitments nor their moral rectitude, but paradoxically by the tenuous bonds of envy, competition, and exploitation.
So it's a bit of a parable about private vice and public benefits in a bee colony, and about how and why that succeeds or it doesn't. And let's work with the preferences and sensory capacities and affordances that we have, and then think about how to engineer that with stigmergy and cumulative culture towards the kind of collective outcomes that we want to see, rather than try to optimize it at potentially a lower level or without the whole picture in mind, which could lead to not just private vice but public vice as well. Nice. Stephen? Yeah, thanks. So in a way, what you're saying when they're talking about envy and such, it's more like, I suppose, a phenotypical or social property rather than this individual, oh, you've got it, well done. But we tend to now focus on it as an individual behavioral abnormality, I suppose, and you lose that capacity, or we lost that capacity, or hopefully now we're gaining it, to speak at that bigger level, and not just the bigger level in terms of a statistical social science averaging, but one where people are actually still purposefully engaged in the process. So that mid-scale type of thing. So Daniel, and then I'll give my thought. Oh yeah, Blue. So my thought kind of ties back to what Scott was saying about all the books that we have around us, and also into this concept of synergy. And it's something that I was thinking about. Just yesterday we had this literacy report come out in New Mexico that was just horrible. The kids can't read. I mean, not for a long, long, long time. But maybe it made me think: we are doing this method of communication here via YouTube and connecting over Zoom, where we're into this oratory or even audio-visual, is there a bigger word for that? Like a video transduction of information. And it's like, maybe the need to read and write is becoming obsolete.
I mean, as we're moving in and as we're externalizing all of this into our niche environment. I mean, in the history of early humans, history was an oral tradition, and then it became a written tradition. And now maybe the need for reading and writing is not there, because we're back into this ability to directly communicate across vast distances and to leave recorded traces of this audio-visual communication in time. I mean, yes, in a medium that's less permanent. I mean, if the internet survives or whatever. But I mean, we're all under the assumption that in 50 years, if humans are still alive, this video will be archived in the Wayback Machine or somewhere, right? And so maybe are we losing the need for that? I don't know, just a thought. Daniel? So I'll give a comment on that related to ant brains, and then we can go to the dopamine discussion. So there's a scaling relationship that people have observed with social mammals, where the larger the group size of the mammal, the larger the relative brain is. So that suggests that sociality in mammals includes like memory, the ability to simulate counterfactuals potentially, theory of mind or thinking through other minds like we talk about in active inference. But being social for mammals is clearly like a cognitive load. And in the ants, a trend in the other direction is observed, where larger colony sizes often have smaller relative brain size of the workers. And that's one of the key differences between social and eusocial. So a lot of times people will talk about the bees and the ants as being social, as if they're hanging out or doing it because they like each other. But the reality is that ants have been colony living for 120 million years of evolution, slash creation, however you wanna look at it. And they're not doing it just to hang out.
So it shows that once you make this phase transition from social, which might have increased cognitive burden, to eusocial, it might be associated with decreased cognitive burden and decreased cognitive necessity of the individuals, because it's like, it's good to follow the trail, and go off on your own and just wander around, and leave the trail when you find new food, et cetera. Those are simple rules that aren't necessarily requiring tremendous onboard cognition. Yeah, Scott. Maybe that's why populations of humans don't have to be educated. Maybe we're going back to that, and maybe we don't need high levels of teaching for everybody in society. There will always be, there's always gonna be a distribution of education and many dimensions of that education. And all we can hope is that the cumulative context supports everyone. I mean, the reality is, even if we tried, not everyone's gonna be a brain surgeon and a rocket scientist and an ant scientist and the lawyer and the doctor and the grocer, right? So we have to specialize and parse. And so it's kind of interesting, because the aspiration of self-cultivation suggests that you want to have a strong body, strong mind, and every individual needs to be the representation of everything. But maybe that aspiration of power residing in an individual through individual capacities is not scalable. Even though it's observed in mammals at a certain scale, maybe at a certain scale you have to be eusocial, because the intrinsic complexity of the society and the information available is such that it's impossible for it to reside in a single brain. But again, one of the things I say is that I tell people the brain is infinitely capacious because it's just an antenna. And it's just tuning into the mind, which itself has whatever capacity, every capacity.
So again, maybe we need to train more, not in substance but in process of learning, or in knowledge discovery rather than in fact, so that people can navigate the mind that's collective with their brain that's an antenna. I had a similar thought actually, Scott. Like maybe, I mean, we're becoming so connected, like, you know, even just a hundred years ago we were very dispersed. And so maybe as we're becoming more connected, we're developing more of like a hive mind, where the reliance on individual survival or cognition is less. So Stephen, and then we'll go to Daniel and ant brains. I'm just gonna say, instead of just in time manufacturing, it's just in time thinking. And also tying into that process piece, you know, how much is it about getting the knowledge, i.e. the book off the shelf, and how much is it knowing the policies for how to engage that book? And that's where it can be valuable to see an old video or to have met, you know, someone who actually met Einstein would have known that other side to how he wrote those books, right? We just have this encoded form. And there's other ways things can be encoded, right? And that may be what you experience when you have a chat with someone, maybe a professor or someone who wrote the book, there's that quality to it. And that might be something worth thinking about, because in a way these ants really, they've got to have trust in their action policies scaffolded by some sort of pheromone trail. And we, if you actually look at most psychiatry, it is basically taking the individual and trying to maybe reduce their anxiety or the distortions by changing serotonin or dopamine levels. And I think what this is showing is, well, you know, that might make someone less anxious and less, you know, less challenged in their integration into society.
But there might be other ways, and that speaks to ecopsychology and other things, which maybe seemed a little bit hippie before, but they can be scientifically rigorous from what we're talking about. And that's great. So just a quick little interjected question, because Stephen brought it up, and this is something that I've been thinking about and was gonna bring up with the counterfactual point: is there a role for trust in the social insect synergy? Like, maybe Daniel has examples of ants that are liars or ants that can't be trusted, or is there some kind of deviant form of ant thought or behavior? I'll give one interesting example of that, which is, under the clean division of labor model, one would expect that queens should just be, like, super fertile, and workers should never have any capacity for fertility. And so it was found empirically in honeybees and in some ants that some workers had partially activated ovaries, like they retained the capacity to reproduce, which is also facilitated in those insects because unmated females can make male offspring just by laying a haploid, unfertilized egg. Initially that was framed as cheating, in the game theory of cheating and all of these ways that you could think about economic exploitation, the free rider problem. But at the same time, it's the case that queens do die. And when that happens, the colony is almost hedging its bets against one failure mode by having this just in time capacity to salvage some reproductive fitness with pre-activated workers. So was it deceptive at one level? Was it adaptive at another level? There's many examples like that. And I would say it all falls under that ambiguous stimulus. I'll ask a question from the chat, and then we can, because it will take us towards the brain.
Sergio asks, how can the autonomy of an agent, for example an immune B cell, living within another autonomous meta-agent, like a mammal, be accounted for by colony behavior and stigmergy? It's an awesome question. It speaks to the nested formalisms. So ants also have mobile cells, and we're abstracting around that when we just have a nest-mate level model, but potentially nested modeling could speak to that, because T cells are perceiving their local niche and then they're acting in their niche. They're releasing cytokines and they're changing the immune context, which can bring other T cells or B cells in. And then I posted a paper, Distributed adaptive search in T cells: lessons from ants, with Professor Gordon and others, that was explicitly about that. It was like a triple play collaboration between ants, immune cells, and computational search algorithms. And the way in which computational search algorithms and computational frameworks like active inference, although not in that paper, give us a common descriptive ability across the ants and the T cells. But then once we frame it, we learn from the ants and the T cells and the richness of nature, and that expands our computational models. So it's like a feedback in the informational niche between the kinds of cognitive and computational models we propose, which helps us generalize, and then we get surprised by ways that nature actually works that feed back into how we modify our models. So maybe anyone else can give a comment on the B cells and the ants, and then otherwise we can go to the dopamine. You think we're good? Yeah, to go to the ants and the dopamine: actually, Marco, I was curious how, I know it's a big question, but how has dopamine been seen in the human neuro community and clinical side, and where does active inference come into play or expand on that? Because I suspect that we might see some parallels with what it's doing in ants as well.
But what has the situation been, and where is it going with human dopamine? Well, I think that depends on who you're asking. There is for sure one big focus on the rewarding effects, or let's say the role of dopamine in addiction, and not just as a positive prediction error signal. And that you might attribute to the dopamine tract that comes from the VTA and goes basically throughout the cortex and also to the nucleus accumbens. But then you have also the movement facilitatory role of dopamine, which stems from the substantia nigra and innervates the striatum, and basically serves an action policy selection role via selecting among competing affordances, I guess you could say. And by that, you select one movement strategy, and if you lack dopamine, then basically you're freezing in your movements, and that's one of the cardinal symptoms that you get in Parkinson's disease. And as an overarching principle, I guess, at least that's the intuition I have right now, is that dopamine somehow modulates how an agent interacts with the environment. So it's more this interface: how do you weight signals from the environment that you can react to? And that's, I think, the aspect that combines the rewarding function of dopamine with the motion and action permitting function. So yeah. Cool, I see a lot of interesting parallels with the ant world. So first, for those who might not be familiar, ants and invertebrates have a lot of the same molecules in their brain as mammals do, like some of the same exact neurotransmitters, like serotonin and dopamine, which is why some of the same drugs that mimic the structure of those neurotransmitters have effects in invertebrates as well as in vertebrates. But they don't have the same anatomy of the brain. So like you mentioned the VTA and the substantia nigra: ants and the invertebrates don't have those exact structures. However, they do have some structures that play similar computational or functional roles.
So point one is, that's an argument for why we need to look beyond just the brain layout of the mammals, because not all cognitive systems inside or even beyond animals have that exact layout. So it motivates, just like we've been talking about, looking at the specifics, really understanding how do these different dopaminergic signaling pathways work in the mammals, but also what are they doing functionally and computationally, so that we can understand and design other systems. And then dopamine has been studied for playing a role in the regulation of individual foraging in invertebrates as well as in vertebrates. So people have done experiments giving dopaminergic drugs to insects and seeing how it increases their foraging activity. And even in the ants that are shown on slide 14, the ant going into the nest entrance there is painted because it's been treated with dopamine. So it's possible to collect ants, treat them with dopamine, and measure how much it influences their dopamine levels in the brain. We found that increases in brain dopamine through pharmacology increased foraging activity. And we found that providing an inhibitor of dopamine synthesis decreased ants' foraging activity relative to a control group on the same day. So on first pass, that is consistent with this reward role for dopamine. And that's the kind of empirical observation that people have found confirms their belief that dopamine is playing a role in reward. But then there's what you said at the end there, that reward and action come together with sensitivity. And so we can actually add in something that we know about these ants and about other ants to think about how dopamine plays that role. Well, in this ant species, we know that individual nestmates make the decision to go foraging or not probabilistically, depending on the rate of interactions that they have with incoming successful forager nestmates.
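That interaction-rate decision rule can be illustrated with a toy simulation. This is our own sketch, not the model from the paper: the `gain` parameter, the scaling constant, and the success probability are all made-up stand-ins, with `gain` loosely playing the sensitivity-modulating role being described for dopamine.

```python
import random

def simulate_foraging(steps=200, n_ants=50, gain=1.0,
                      success_prob=0.8, seed=0):
    """Toy interaction-rate model of foraging regulation. Each step,
    each ant still in the nest leaves with a probability set by the
    rate of successful returns on the previous step, scaled by `gain`
    (a hypothetical stand-in for dopamine-modulated sensitivity to
    interactions). Returns the number of ants out foraging per step."""
    random.seed(seed)
    recent_returns = 1           # seed the loop with one returnee
    out = 0                      # ants currently out foraging
    history = []
    for _ in range(steps):
        rate = recent_returns / n_ants
        p_leave = min(1.0, gain * rate * 10)  # arbitrary scaling
        leaving = sum(random.random() < p_leave
                      for _ in range(n_ants - out))
        out += leaving
        # each ant that is out returns with a seed this step, or stays out
        recent_returns = sum(random.random() < success_prob
                             for _ in range(out))
        out -= recent_returns
        history.append(out)
    return history
```

With `gain = 0` (no sensitivity to incoming foragers) no ant ever leaves, while raising `gain` sustains foraging activity, which is the qualitative pattern the dopamine manipulations showed.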
So if there's a high rate of incoming successful foragers, it stimulates more foraging. And if there's a low rate, foraging activity ceases. It's like, if you block incoming foragers from coming in, there's a lag followed by a drop off in foragers leaving. And if you have an excess of foragers coming in, it leads to an excess going out, which eventually does deplete the workforce. So it's not like there's infinite ants they can just keep on sending, but that's a self-regulating process, so that when things are good and food is coming in, send more, it's a good day to forage. But if there's only a trickle, or there's like a predator blocking one of the paths, then no more ants go out. And so one of our hypotheses, although it would take a few other kinds of experiments to really go into this in detail, would be that dopamine plays a role in modulating the sensitivity to interactions of nestmates with each other and their sensitivity to changes in environmental conditions. And so by framing what dopamine does as modulating precision, or modulating sensitivity to interactions, we integrate the mechanisms that we know actually underlie the regulation of foraging in ants with the empirical outcomes that increases of dopamine signaling increased foraging activity and decreases decreased foraging activity. So Stephen, and then Marco. I'd be interested where your thoughts are in terms of the type of foraging, whether it's kind of an anxious foraging, if that makes sense, because often I think that some of the serotonin reuptake inhibiting medications, they're helping to reduce anxiety. And there could be a difference in the way that people are reading those sensory inputs. So I suppose, could the ants also be foraging more if they're anxiously foraging, if that makes sense? Maybe a sudden peak, and then at some point that doesn't become such a dominant dynamic for addressing reward. Do you want to respond, Daniel? And then we'll go to Marco.
Yes, there hasn't been a huge amount of work on serotonin and foraging, but there is one paper I'll post in chat, which is on how serotonin modulates worker responsiveness to trail pheromone in the ant Pheidole dentata. So it is known that different neurotransmitter systems play a role in modulating the sensitivity to interactions. And whether that gets psychologized or projected, again, I don't think that's always the right move to make, but empirically serotonin does play a role there. So what that makes me wonder is: the neuromodulators that you usually find in mammal brains, do they all exist in ants as well? Like, including norepinephrine, acetylcholine, et cetera. Daniel? So it's a very interesting question. There are some that are kind of like A-list celebrities in vertebrates and invertebrates, like dopamine and serotonin. Then epinephrine and norepinephrine, which are very important for the regulation of arousal in vertebrates, their functional roles are, to a large extent, played by the neurotransmitter pair of tyramine and octopamine in invertebrates, which are trace amines received at the trace amine receptor in vertebrates. And so the exact proportions of these different neurotransmitters differ between vertebrates and invertebrates, but the metabolic enzymes are in the same gene families. And some of the most interesting analyses were actually to look at the sequence evolution and the duplication and deletion events of the metabolic enzymes that transform neurotransmitters into each other, from common amino acids into neurotransmitters, synthesizing and degrading them. And then the protein receptors that are used on the cell membrane, like the dopamine receptor. So the gene families are homologous, the chemicals are similar, and for neuropeptides, there are some ancient gene families that are shared across animals, and there are other vertebrate-specific and invertebrate-specific neuropeptide gene families.
So they're more similar than dissimilar, and there's a massive amount of overlap. And that's why, again, drugs that are designed to work in one species often have a similar effect in another; like neonicotinoid pesticides, they affect the acetylcholine receptors of invertebrates broadly, just like nicotine affects the acetylcholine receptors of humans. Right. Yeah, I think the really cool promise of active inference is that you can understand different cognitive systems in terms of the messages that are being passed. And there are lots of comparisons that you can make, and I guess that's also a good general test of the substance behind the whole framework, to try and make those mappings, where you say you have signals that get passed from the bottom up, typically by glutamatergic trains of spikes, let's say; they might be matched by expectations that come from the top down, which might be inhibitory. And then you have all sorts of modulatory signaling molecules that set the precision of certain messages there. And once you have that framework established, you should be able to find it in all kinds of different cognitive systems, whether they are structured like the brain, where you have physical pathways that link one region and another, or whether they are, let's say, more based on externalized elements like pheromone trails or whatever. So, I mean, we're very much at the beginning, but I think there's lots to discover. Yeah, Marco, I just wanted to respond to that really quickly. I don't know if you caught our last livestream, I think it was 28.2, the one that was last week, but it's interesting: we were talking about mind-wandering, like, what does mind-wandering have to do with foraging or ants, but there's a lot of connections to be drawn.
And really, we're thinking about tracing a thought through the brain, which, I mean, my background is also in neuroscience, is like passing a message from one neuron to the next. And really the basis of that paper was mind-wandering and meditation, and, not the suppression of thought, but when you consciously inhibit the thoughts from wandering around, so it's a mindfulness practice. And it's really interesting because in the ant colony there are so many relationships that I can see, even between, like, foraging: when you're wandering and your mind is wandering, is that like thought foraging? But I mean, also more than that, because there is that synaptic release. And so I wonder if that synaptic release or synaptic transmission that happens in neurons is similar to this pheromone deposition, right? So you have synaptic release, and that encourages more neurons to keep signaling. And you have this pheromone deposition, and that encourages more ants to keep foraging on those trails. And so I really do wonder how much of what our neurotransmitters do for us is externalized in this kind of niche modification and external cognition, which for us happens internally. But I wonder if the ants are doing that as part of this, like, hive mind mentality. Sorry, Stephen, I was rambling. Yeah, no, these are very interesting points. One thing I really picked up on as well, with what you're saying, and Marco, is this message passing that you have in active inference; it is very liberating. Because traditionally you talked about it in terms of signals, and signals end up maybe being electrical signals, and it ends up being like a wire that we think of in the electronic world. And it's a very big trap, you know?
So then there's somehow a signal going through, and then next thing you know, everything implicitly or explicitly becomes a computer. But there's just this general idea of message passing somehow. With signals, it's still hard not to make it about, you know, a transmission system, more like a wire. But message passing, at first I always did wonder why they were using that term, because it seems so ambiguous, like, what's going on with that? But in some ways I can see so much benefit from that formalism. Cool, Daniel? Yes, the signal, and signal transduction, signal processing, these are frameworks that are used in electronics, but we also talk about signal transduction in the cell. So it shows how wide the usage of that signal term is, and it almost implies an essentialism to the meaning of the signal, like it actually carries a meaning. But when we think about message passing, we can be clear about what that message is, and then look at how the generative model interprets the incoming stimuli or observation, instead of treating the signal as what is bearing meaning. We can think about the stimuli as just being the empirical observation, which, as we've been discussing, from the sensory level on up through the narrative, observations are ambiguous, and they require a generative model in order for sense-making to occur. And so, agreed that not only is message passing a way that Bayesian models can be updated, like we learned about from Bert de Vries and others, but it's also a way that we can step away from a sort of signal-versus-noise partitioning paradigm and move into an empirical observation meeting a generative model as meaning maker. Marco?
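The precision-setting role of modulatory messages discussed here can be made concrete with a single Gaussian belief update. This is a textbook sketch of precision-weighted updating, not code from the paper:

```python
def precision_weighted_update(prior_mean, prior_precision, obs, obs_precision):
    """Combine a top-down prior with a bottom-up observation.
    The posterior mean shifts toward the observation by a prediction
    error weighted by the relative precision of the sensory message."""
    posterior_precision = prior_precision + obs_precision
    weight = obs_precision / posterior_precision  # trust in the sensory message
    posterior_mean = prior_mean + weight * (obs - prior_mean)
    return posterior_mean, posterior_precision

# Tripling sensory precision (a modulatory message) pulls the belief
# further toward the same observation:
m1, _ = precision_weighted_update(0.0, 1.0, obs=1.0, obs_precision=1.0)  # 0.5
m2, _ = precision_weighted_update(0.0, 1.0, obs=1.0, obs_precision=3.0)  # 0.75
```

On this reading, a neuromodulator like dopamine does not add content to the message; it changes the weight term.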
Yeah, so what I also like about this, and I might have a completely different perspective on this than you have, but the nice thing about messages is that it takes you away from this object orientation, where you identify things and you can say they either are okay or they're malfunctioning. You rather look at the interaction between, let's say, vertices or objects, and you find maybe more illuminating answers there if you think about psychopathology, for example. So it's not that you can point to one brain structure and say, ah, that brain structure is broken, and that's why I have symptoms. It's rather to think about it in terms of, well, maybe this message cannot be transmitted, and that's maybe where you have to look for answers. One other thing about message passing that I think is a great way to think about active inference is that it really grounds it in information theory, because Shannon information spawned from that idea of message passing and coherence in the message; but in the active inference framework, it's always subject to interpretation. Stephen? Yes, I started to think, as you were saying that, Marco, about someone shaking hands with someone else, okay? You don't so much transmit a handshake as pass a handshake, you know what I mean? Because both people have to reach out, whereas when you transmit a signal, it can be a one-way thing. I suppose in some ways I agree with what you're saying, Blue. I would also say that in some ways Shannon entropy is probably a bit stronger on the signal side. So in some ways it gives that effective message passing, but active inference goes, well, it seems to offer a way further down that route, because it gives this other way to extract. So it's not just that the noise is diminishing the signal; there are other ways of getting, through action, at what's in the signal. So I suppose in a way it's a bit like the handshake.
It's like, what's that hand saying to me, the funny handshake from the Mason? Well, I reach out and I hold it, you know, and I get a bit of a better feel, and I see what I want to pass up my predictive-processing chain. Daniel? It just makes me think about how, when we think about signal occurring amidst noise, it's almost like we're getting one unit of stimuli, whether it's RGB channels on the TV or a frequency range for audio or whatever it happens to be. If the task is to find the signal in the noise, we're always going to get less than one. Could be 99% signal to noise, could be 5%. So we're kind of always getting less. When we think about the stimuli as being a message that's passed to us, and something that we enrich, we can make it more than one. So that secret handshake or that shape can be unpacked and understood in a broader context. So for example, how many bits of information does it take to communicate how to make lasagna or something? You don't need to describe every single step. For somebody who knows, it only takes one bit of information, maybe just the letter L or something like that. And so signal processing, again, makes us think about diminishing and filtering what we're getting in, whereas maybe active inference helps move us toward understandings where there's an enrichment of the information coming in, which is always required, but especially when there's sparse or ambiguous sensory stimuli. Next, Stephen. That's a good point, Daniel. I mean, that sort of takes you to the value of various fairy tales. Yeah, I hadn't really thought about that before, but the idea that the generative model is free to make more of something, and embellish it, I suppose. And of course, that may also really come into its own when you're bringing in multi-sensory information. So, you know, there's this uncertainty, but there may have to be something which embellishes early on.
Maybe that explains why children early on get very into dreams and very wacky sorts of modeling of the world, because they maybe want to extrapolate beyond, you know, what that sound and that light source and that feeling and that smell of food, you know, the porridge for the three bears that Goldilocks is eating, is actually going to signify for them. So, yeah, that's interesting. So, did we already have this conversation about, do ants dream of undead crumbs? Did we have that conversation, Daniel? Is there a book? There's something, right? I think you dreamed it. Okay, okay. So that brings me to my next question. I was just listening to Josh Tenenbaum's lecture, and I read the DreamCoder paper, in which the computer explores potential recombinations of symbols and stuff in, like, an off phase; it's like a Helmholtz machine, there's a wake-sleep cycle in the algorithm that was being used. And so, I mean, I guess ants sleep? I mean, they rest, do they rest? They have to, like, assimilate and update their generative model. Do they sleep? So, people have studied circadian rhythm, and the foragers have a strong circadian rhythm. Again, probably not in every single species, but we did some work with gene expression and found cycling of circadian rhythm genes in the brains of foragers, but not nurses, in this species. So nurses are always indoors, low-key working all the time; they take a lot of micro-naps. And foragers have more phasic circadian rhythms, especially for insects that use the sun to navigate, like honeybees. Well, so what I really wonder is, what about the collective ant generative model, right? Like, you think about this as collective behavior or collective consciousness; is there, you know, dreaming, or potential recombination, like exploration? Does that happen in an ant colony?
It's an awesome question, and another example of colony-level phenotypes. And people have studied the activity waves in ant colonies over multiple time scales, like minute time scales as well as the circadian rhythms. And that is actually not dissimilar from the multiple bands in the EEG. So just like the electrical dynamics of the brain, which present as one time series, you can do a decomposition to extract out, like, the alpha and the beta and the theta bandwidths. Or just like there's only one 2.4-gigahertz Wi-Fi band, yet there are multiple Wi-Fi networks that can coexist. So how does that multiplexed coding happen, for the signals that we make, like Wi-Fi and radio, and for the brain, which is using different frequency bands simultaneously? And there are also these oscillation dynamics in ant colonies. So it's the whole colony, and it's a wave; like, what kind of signal technology, not yours, but theirs, carries those waves? How do they do it? Similar to the way that with the time series of the EEG you would do a sort of Fourier decomposition to look at the different frequency modes, people look at the spatial positions of ants and then can ask what the activity dynamics are through time. And one paper I'll post in the chat is from 2016 by Gelblum et al., Emergent oscillations assist obstacle negotiation during ant cooperative transport. So actually having this sort of oscillating group activity helps them not lock on too specifically to the direction an object is being pulled. Like, if there's some object and then there's the nest entrance, yeah, okay, we'll pull in a straight line. Isn't that the shortest distance between two points? Yes, but first off, no one knows exactly where the nest entrance is. And second, even if it were the most direct, it might not be the most resilient or tractable.
But if you just say, okay, we're gonna pull a little bit this way, and zigzag it this way and zigzag it that way, you get that wisdom of the nestmates, because they each have a different estimate of exactly which way to go, and those get averaged out. And if there's a little blockage somewhere, built into the movement that this object is being brought home with is the ability to go around it. And so that's another example of how processes that are dynamic at the very, very micro level, with the agents, often constitute more adaptive and more tractable emergent processes. It's like, we either get the resilient, tractable agent-level rules, or we get the fantasy: well, we should just move it in a straight line, and everybody should know exactly where to go, and if they don't, we're gonna punish them. That's not how it works. Awesome. And so that kind of leads nicely into, I think we were already talking about the wisdom of the crowd and the example with the ox, maybe that was in the .0 or maybe last week, but it really leads me to thinking about collective behavior and collective intelligence. And that might be a nice question for people to give thoughts on, because it's in my own interest to find out what other people think about the difference between collective behavior and collective intelligence. Any thoughts there? Stephen? Yeah, sort of linking that, and what you're just saying now, with what Daniel was just saying: in some ways the brain, so you've got it going more chaotic, which I suppose in some ways you can do through the environment, or the environment will do it to you, and then you've got this synchronization of waves, as was mentioned. So you could go from that kind of synchronization of waves, and I suppose it could become more coordinated or it could become more chaotic.
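The averaging-out of individual heading estimates described above can be sketched in a few lines. The Gaussian noise model and the numbers are illustrative assumptions, not measurements from the paper:

```python
import random
import statistics

def pooled_heading_error(true_heading, n_ants, noise_sd=30.0, seed=0):
    """Each carrier holds a noisy estimate (in degrees) of the direction
    home. If the group's pulling direction is the mean of those estimates,
    its error shrinks roughly like 1/sqrt(n_ants)."""
    rng = random.Random(seed)
    estimates = [true_heading + rng.gauss(0.0, noise_sd) for _ in range(n_ants)]
    return abs(statistics.fmean(estimates) - true_heading)

# A lone carrier zigzags a lot; a large group hugs the true direction.
solo_error = pooled_heading_error(90.0, n_ants=1)
group_error = pooled_heading_error(90.0, n_ants=200)
```

This only captures the averaging component; the residual oscillation around that mean, as in the Gelblum et al. result mentioned above, is itself what helps with obstacle negotiation.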
So you've got this kind of balancing, in terms of sense-making, between more entrainment, more coordination of action, either between agents or between the sensorimotor properties of the organism, which would in some ways be more focused, you would imagine, and has that risk of rigidity but has the advantage of efficiency. Or, so brain waves, this synchronization of brain waves, seems to give you a sort of mid-scale potential. So it's not completely chaotic, but it's not completely synchronized. Nice. Yeah, I don't know, I'm not sure what the difference really is between collective intelligence and collective behavior. There's this cool diagram of collective behavior and collective intelligence. And it shows cooperation and coordination as essential to collective intelligence, but those are both behavior, cooperation and coordination, and then what's left is cognition. And so is cognition intelligence, or, like, what is cognition? Anybody have any thoughts on that? Marco, I know your background is neuroscience, so let's let Daniel think and then I'm gonna put you on the spot. Studying collective systems, it always struck me as interesting because the organism is a collective of cells. And so it's more like an approach that we take to any system, to see it as arising from interactions at an even lower level. And so when I look at these very human-centric thought maps, informal thought maps, they include things like open-source software, P2P, business; those are human-centric. And so it ends up muddying the waters and introducing a lot of psychology and game theory and terms that aren't distinct, like, is cooperation distinct from coordination?
And that's one reason why having a perception-cognition-action model, even though it comes across as very austere and distant from the phenomena that we want to explain, like coordination, cooperation, and cognition, having such an austere but scalable and flexible model helps us get at these terms without presupposing them as natural things that exist in the world. So we get to just model what is happening on the ground, like with the ants, or on the ground in another context, and then ask how our definitions or parameterizations or formalizations of coordination and cooperation coexist as descriptions of the process that we're specifying at the actual proximate mechanism level. Next, Stephen, and then Marco. Yeah, I think that challenge of these things having a psychological, or folk-psychological, or cultural labeling is limiting, and stepping back from that in some ways creates more uncertainty. I could say that it seems more austere, but it's more austere because it is less clear initially. The benefit of being less clear, and this actually ties in a bit with some of the infusion space work that I've been working on, is that it makes it more nascent, whilst active inference gives that holding container. So by being more nascent, you've got more ability to then proceed again with another type of naturalistic, potentially more naturalistic, framework, and not be tied to a naturalistic framework based on language and all the distortions that can come from that. So I think that's a really big part of what makes it useful. Next, Marco. Oh, sorry. No, thanks. These are all super good points, and I agree completely. And I think this is also recognized in psychology, that there is a tendency to attribute all kinds of terms to different brain areas. For example, for the prefrontal cortex, there are more functions assigned to it than it could possibly hold.
For example, you could find papers that identify the location of stock-trading behavior. And these are clearly human-invented terms, and there is no reason why they should have an expression in the brain. And so I think a nicer way of thinking about this, and I think you mentioned it before, is this dynamic between segregation and integration. So sometimes it makes sense to be focused on a particular task: let's say without lots of distractions, you are very efficient at performing one particular task. But that might not be the most creative mode of operation, where you might rather want to have lots of different brain regions simultaneously interacting with each other and exchanging lots of information. That's not efficient, but it's very integrative. So neither of those extremes is probably viable by itself, but if you have a good, flexible way of moving between segregation and integration, you can be very task-oriented and specific, but you can also be creative and incorporate lots of different information. Awesome, Daniel? It's a good point, Marco, and also, that meta-task of appropriately choosing how to shift tasks is framed as a type of mental action in the nested, deep active inference formalism. So we can actually think about that shifting between different affordances at the attentional level as another kind of action selection task that uses this same perception, cognition, and action framework. And I agree as well, Stephen, it does start with a more atomic germ; it's more nascent, and that's what allows us to map and bridge and compose. If we try to put too much meaning into the lowest level, we actually constrain how tall the buildings we can build are. And Douglas Hofstadter talks about this in the context of reflexive computer systems, brains, and ant colonies. Today, just as in 1979, those are three of the most important systems to understand.
And he gets at, in all of those systems, how the lowest level, like the interactions among nestmates themselves, or the letters, cannot have meaning in the way that, for example, words do. Because if you load all the meaning of words onto phonemes, you can't compose sentences. And it's just very interesting how, even though of course we want that meaning, we have to actually take a dip into using pieces that are sub-meaningful in order to have flexible meaning. Because if you just try to run straight for the meaning, you're gonna end up with a very constrained system that might have the one function or meaning that you want to encode, but it won't have the ability to be reflexive or to generate novel meaning. Thanks. So just in the last few minutes, I want to thank Daniel for taking the time out to discuss this awesome paper with us, which I've been really wanting to discuss for several months. And hopefully he'll be back next week with more conversations about active inference. And Marco, welcome, it was really nice to have you here. And so, if anybody wants to give a final thought on this paper, or some stuff that they're excited to talk about next week. Well, thanks a lot. This was really awesome, and I learned a lot. There are also lots of references, I think, that I need to check out now that you've mentioned them. And yeah, really, thanks a lot for the paper and for presenting today. Stephen? Yeah, just to mirror what Marco says, and also welcome, Marco, thanks for joining us. Really, really enjoyed the different avenues, and actually the ability for it to keep coming back to something which could hold those different branches, which is quite nice. So we saw an ability to come back to and reintegrate a lot of different ideas. So that was really nice. Let's look forward to keeping that going. Cool, Daniel? Well, thanks everyone for joining, and Blue for the excellent facilitator and broadcaster role.
I think for 29.2, what might be awesome, other than having some novel estimates on the stream and in the live chat, would be to lay out some of these analogies that we've been exploring. Like, okay, agent and environment: all right, that's nestmate and pheromone density. There's the environment that you can control, and then there's sort of the abiotic environment, or the parts of the environment that are outside the affordance of niche modification, like the thunderstorm. So maybe having that kind of a table would help us align a lot of the systems that we've been talking about: ants, computers, brains, societies, other systems that people care about. That's kind of like us preemptively foraging, maybe laying down the inkling of some trails, because just like a few years ago, when I was deep in the ant game, I saw that there was a way to bridge active inference and ant research in a potentially new way, maybe someone's gonna go, oh, I mean, architecture, I do that every day, and I've been looking for a way into active inference. Or another way that maybe we could help scaffold somebody else's research, because I think the stigmergic modeling and a lot of these discussions apply to a lot of systems and things that people are already working on. So that would be awesome to talk about in 29.2, and to build momentum going forward. Awesome. Well, thank you again, everyone, and we are looking forward to our discussion next week. So get in touch if you are watching on the livestream and want to participate. Bye.