Welcome to the Active Inference Lab livestream number 29.2. Today is September 28th, 2021, and we will be discussing the paper, Active Inferants: An Active Inference Framework for Ant Colony Behavior. And today we are here with the first author, Daniel Friedman. The Active Inference Lab is a participatory online lab that is communicating, learning, and practicing applied Active Inference. You can find us at all of the links shown here. This is a recorded and archived livestream, so please provide us with feedback so that we can improve upon our work. All backgrounds and perspectives are welcome here, and we will be following good video etiquette for livestreams. Here at the short link, you can find all of the upcoming and recent livestreams that we have done, so please get in touch if you would like to participate. Today our goal is to discuss the paper that we just named, Active Inferants, and we are going to do some different papers in the upcoming weeks, so like I said, get in touch if you want to participate. So we're just going to introduce ourselves and go through a little bit of warming up. I am Blue Knight. I'm an independent research consultant from New Mexico, and I'll pass it to Daniel. Thanks, Blue, for facilitating. I'm Daniel, and I'm a researcher in California. And I guess I'll start with what I'm excited about today. I was just excited that in 29.1, we could sort of forge out in a lot of different directions and make some initial trails to a few different ideas. And now in 29.2, maybe we can develop some of those threads slash trails and maybe get some of it down in writing, so it should look pretty good. Nice. Yeah, I'm excited too. There are some questions that I've been mulling over during the last week, and so I'm kind of excited to delve into those and also into the code for the model today if we have time. Being that it's just us, I think we might get there. Well, maybe more people will join. There might be a little bit of lag.
I'm not sure. So I might want to check that. Okay. Yeah. So the big question of this paper is how can a group of active inference agents solve complex group foraging problems? Well, first what I'll say is if you're recording it locally as well as streaming, then we can always upload a high quality recording. And it does sort of bear on this big question. This is a quote from the paper, and that's sort of the big question in collective behavioral studies, which is, okay, if each agent, if each subunit of the system had all the information needed, well, then there wouldn't necessarily be a challenge for coordination. It's like if it already knew what to do, then it wouldn't be a challenge. But how do systems make adaptive decisions when subunits don't have all the information? So in the case of an online team, each person's only seeing a different part of the picture. In the case of the ants, they're each in a little bit different part of their niche, but those are some of the similarities, which is that each nest mate is only going to be experiencing its local environment. And we want to have colonies that work. So welcome, Stephen. Maybe you can say hello while Blue is just testing some stuff out. Oh, great. Yeah, good morning. Well, in North America anyway. Yeah, I'm based in Toronto. And I work with communities and sort of systemic coaching approaches. And I'm very interested in how active inference scales in different ways. So even though this is about small things, it's also about lots of small things. Well, your audio also just had some weirdness. Stephen, maybe talk directly into it. Oh, okay. Is that better? Oh, okay. But you mentioned scaling and small things. The one funny thought, again, this picture is from the ants that I was studying for five years in Arizona. And yes, the nest mate looks like a tiny guy, like a small ant. But then we know that these colonies can live for 20, 30 plus years.
They're also several meters under the ground and they weigh, you know, dozens of kilograms. So then I'm thinking, okay, it's a similar age, weight and height to myself. And it's a beast. But let's see. How's that? Is it any better? It still is not doing anything. I just reopened it. But if it's not, yeah. Okay. Okay. Now it's just the slides, or just the Jitsi. So it looks like it's working now. Okay. Perfect. Yeah. It's the slides open also. It's just too much. Yeah. Yeah. Welcome back everybody. Fun times. The show goes on. Foraging never stops. And Stephen will rejoin us. He was having audio issues also. It's a good day for technical mishaps. We're having rainy weather in New Mexico, which never ever happens. So all the ants are underground hiding, and the internet, apparently, does not want for opportunity either. So it goes. All right. So speaking of multi-scale integration and the things that happen in ants and in the ant colony. So we talked about making this table last time. And so here, how is each system related to the agent, the niche, the interaction, and stigmergy? So we started this table. Do you want to maybe take us through what you've started here, Daniel? Yeah. See if you can get the slides again because it's not going to be visible at all on the stream. Okay. I can also switch. Yeah. I can switch the field. So yeah. So a lot of informal analogies and connections were being drawn amongst different systems. And that's kind of what complexity and systems thinking and active inference is about, like finding patterns across systems. So maybe there's other ways to break it up. But at least one way we could at least start to classify and find similarities and differences between different systems would be this way with the columns. So we have the system of interest, like the colony is a system of interest. Then we can think about what is the agent level? What is the subunit level?
And that might be the level where the active inference agent is being specified. We then have the niche that the agent is operating in. So that's the generative process that's presenting observations or outcomes to the agent, which is embodying a generative model. So that's related to this generative process versus generative model distinction. The process actually outputs the temperature or the observations, whereas the generative model is a model of the process. And then there's two kinds of interactions that are possible. So there's sort of a transient interaction that might be amongst agents. Welcome back, Stephen. There's the transient interactions amongst agents. And in the case of the ants, that's like an antennal contact. And then there's the stigmergic interactions where there's a modification of the niche. And so sometimes it helps to think about these interactions amongst agents that are without memory as being like the transient horizontal interactions, like the common example of the bird flock. Those are the birds looking at each other. These interactions amongst agents modify the likelihood of agents doing certain future behaviors, but they don't leave a trace on the environment. And that can be contrasted with stigmergic or niche modification behaviors, which are defined by being the ones that do leave a trace on the environment. So maybe we'll explore different sections today, but we can always say, okay, let's return to slide eight and let's add a row. Or let's add a column or let's fill in some columns. So so far we have ant colony, brain and internet slash computer, ABCs, GEB, the systems that matter. Stephen, what would be a fun system to throw into this mix and to look for parallelisms among? We can't hear you, Stephen. Yeah, we still don't hear you, Stephen. Yeah, your microphone is definitely not working. Okay.
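Since this generative process versus generative model distinction recurs throughout the conversation, here is a minimal Python sketch of it. Everything here is illustrative (a hypothetical two-state world with the noise level, prior, and likelihood made up for the example), not the paper's actual code:

```python
import random

# Generative process: the niche itself, which actually emits observations.
# Here, a hypothetical two-state world whose readout is right 80% of the time.
def generative_process(true_state, rng):
    return true_state if rng.random() < 0.8 else 1 - true_state

# Generative model: the agent's internal model OF that process. It holds
# beliefs over hidden states plus a likelihood mapping, and updates the
# beliefs by Bayes' rule as observations arrive.
class GenerativeModel:
    def __init__(self):
        self.belief = [0.5, 0.5]          # prior over the two hidden states
        self.likelihood = [[0.8, 0.2],    # P(obs | state); rows index obs
                           [0.2, 0.8]]

    def update(self, obs):
        posterior = [self.likelihood[obs][s] * self.belief[s] for s in (0, 1)]
        z = sum(posterior)
        self.belief = [p / z for p in posterior]
        return self.belief

rng = random.Random(0)
model = GenerativeModel()
for _ in range(20):
    obs = generative_process(1, rng)  # the process knows the true state is 1
    model.update(obs)                 # the model only ever sees observations
# After repeated noisy observations, belief concentrates on hidden state 1.
```

The point the sketch makes is exactly the one above: the process outputs the observations, while the model never touches the true state directly and can only infer it.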
So I started filling in for brain and, you know, it all really is so dependent on what level you want to evaluate, right? So, you know, on one level, there's like the little sub regions of the brain. And he took us through, I think in the 29.0 video, a very nice example for ants describing the different, like, organoids or sub structures in the brain. Because definitely there are structures that all have different functions, like the cerebellum is motor function primarily and, you know, the cortex is like higher level thought. And so there's, you know, all these different sub regions of the brain, but then also at the cellular level, there are interactions, you know, that span those regions, that are from the cellular level. And what the niche is or the interaction, you know, might be different depending on the level, like whether the cells are responsible for driving interactions between the organoids. So I'm not sure what the right level of analysis is. And then stigmergy and niche modification, I think about, you know, long term potentiation and like adaptation within the brain, which is maybe not stigmergic, but it's a really important modification, I think, and sensitivity to different neurotransmitters and things like that. I believe it's still really lagging. Can you just stop streaming and I'm going to pick it up on my side. Okay. Go for it. So stop the streaming and then just let me know when you did it. Let's see how this goes. Okay, let's see what happens. This could be very, very interesting to do the live hack like this. Okay, YouTube, we're going to wait for it to catch up. Sorry, everybody. So it goes. Okay. Wow. We just hot swapped live stream feeds. So we're back in the game. No level of preparation can prepare for laggy internet, I suppose. No, and it's like half the time on my side and half the time on your side.
But if you're watching live, definitely add any comments or questions, because we're just going to be kind of exploring and chilling today. But continue to facilitate and I'll just be in the broadcaster role. Cool. So, yeah, we were talking about the brain and the interaction and the stigmergy. And then for the internet, I don't know, what do you think is the agent or what is the subunit of the internet? Like, is it each server or what? Where do you put the subunit, Daniel or Stephen? What do you guys think? Well, so this sort of ties into what they might call the unit of analysis as well, but that's not necessarily quite the same thing as an agent. I suppose on the internet, ultimately, it's whoever's making choices, whoever's triggering choices in the system. Which is normally, yeah, it's normally a person interacting with the screen in some way. I think the overwhelming amount of interactions on the internet are probably not human driven, because any human interaction is going to trigger truly a vast number of communications amongst other systems inside of the computer and probably remotely. So maybe, and I think this is one of the unique challenges, but also interesting areas, is like the internet is a cyber physical system. So we could have one model where the agent, like the subunit, are physical devices. So we kind of draw the blanket, we focus the partitioning around a physical object, like Internet of Things. We have partitioning around things. We could also have a human centered partitioning. So partitioning around humans. But then there's a lot of gray space in between the humans, a lot of links that aren't human between the human links. And then another approach might be purely informational or computational and seeing the agents as software agents.
That interface, where the software and the hardware agents come together, is where the Internet of Things is developing and what it's coming to mean. I mean, one thing you could think about there also is when, say for instance, I make a choice or I do something on the internet and there's a cascade of other things going on through the different routers and the servers, etc. Is that more like the pheromone trail that sort of dissipates off into the distance, leaving records? And how much of it's like other agents initiating? Because often, you know, every time you do something, to some extent, yeah, databases might update. However, and this may change, to what extent that is alive is another question. You know, is it sort of a computational form of niche modification that's going on in most cases in terms of the Internet? That's what they're trying to do, because it's that idea of the battle for your mind. They're trying to make it so that in most cases the user has less thinking to do in terms of the choices they make and the design. There's a great book in web design called Don't Make Me Think. And the idea is that you go somewhere and it's almost obvious what you need to do, a bit like when you pick up a newspaper, you know that you're just going to turn the pages. So how much of all this architecture in the background is to support giving you what you expect to be given when you expect to be given it? And how much of it is actually what you might term an agent with some sort of teleology? Well, you mentioned expectation there, and expectation is something that agents engage in. So maybe we can keep on forking this off. But what do we know from the active inference ontology? What defines an agent? External states, blanket states, internal generative model. What does the internal generative model entail?
Well, it includes predictions about external states. It includes policy affordances. It includes preferences. It includes expectations over some of these things as well. So when we say agents, we can be referring to the whole Markov decision process or other ways of viewing the active agents. And then the expectations of the agent are really important, because it's not just that psychologically it's going to determine whether the agent likes it or not. But it helps us connect these systems like the colony and subunits of the brain. So we don't need to use only the psychological interpretation. But then we can have a psychological interpretation where one seems warranted given the system of interest. But also we're going to be able to compare that and look for similar functional features in systems without getting into just some sort of abstract debate over whether it's alive or whether it feels a certain way to be that system. So Daniel, you brought up the generative model, and I was just going to bring it up right before this. And so I've been really thinking a lot about the generative model and where is the generative model? Which is I think probably my favorite podcast episode that I've ever cut, it's going to come out next week. And it talks about the location of the generative model. And so it doesn't necessarily have to have a location, because it's like software. Yeah, there's a copy of the software running on disk. Maybe it's in the cloud, but it doesn't matter, because it's the program that's running. So the location of the program is almost not even relevant. So I think about what is the generative model of the internet? What does that look like? Great question. Enactivism for cyber physical systems, and embodiment and encodement. Maybe there's some analogous term that would describe what we call 4E, the embodied, embedded, enacted, extended, etc.
For digital systems, because they do have important attributes that in a strange way almost suggest that some of these features that are suggested to be quintessentially related to having a body and embodiment also exist for software. Can I add something on to that? Say we think about, rather than the active inference generative model, say you've got an active niche modification algorithm. So, i.e., it's still working off the agents making choices. But what's happening is, so for instance, say you've got A/B/C testing happening on a website. So basically they send out three different campaigns with three different types of profiles. Now, based on how the users make choices, maybe across the whole market, the A/B/C choices might get modified in that niche. But eventually maybe then actual clients get sent down certain routes. I mean, a classic case of this is also what happens with those in-store cards. Because I actually, I did an interview with the Nectar card people in the UK once, and they do these cards that you get in store. And basically I think it works out they have like a million permutations of offer that they can give you based on what your different buying options are. So in a way they're kind of, but they only serve you up one of those. You don't see what all the other people are getting. And some of them are small changes, some of them might be bigger changes. So in some ways it's like the niche is modifying to the agent's adaptive behavior. But the actual niche isn't necessarily doing adaptive active inference. But there may be some sort of active inference in a more kind of, I don't know, non-living way. There might be, we'll probably have better words and ontologically informed ways of talking about it. But there's kind of like low bar active inference, or like austere active inference, that is just any input output structure.
It's going to be able to be fit into something like a partially observable Markov decision process. So whatever it is that the niche is doing, it's emitting observations, so we can model it under a certain low bar active inference, versus the sort of anti-dissipative, truly cognitive, strategic active inference. And also there's no reason why we can't have hybrid models where there's like active inference agents in a niche that isn't an active inference agent. So like in this ant model, the niche is just a landscape of pheromone densities with a random dissipation factor. So that external niche could be made very much more complicated. And the observations that it passes back to the agents could be like customized to them. And that could be calculated by an active inference niche or not. And that's sort of the extremely interesting work of Axel Constant and others, with some type of symmetry occurring with the agent and the niche jointly inferring each other. So like what is the niche's generative model of the organism? What is the internet's generative model of the user? So those are interesting questions. I love that idea of symmetry, especially between the agent and the niche, and its relationship or potential relationship to the kind of Gaia theory, that the niche has to expect or have an expectation of the organism that's going to interact with it or create it or modify it or exploit it in some way in order for there to be some kind of reciprocal interaction there. So super nice. Maybe we can come back to looking at this table and maybe think of other systems that might be interesting to go through and move on to some of the questions for 29.2. But wait, before we go to this one, I want to go to this one, hold on. So we were talking about the role of dopamine last time. Which slide? Sorry, this is slide... I skipped two. Sorry, I can't see. I think it's number seven. Yeah, number 11. Oh, okay.
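The niche described above, a landscape of pheromone densities that dissipates over time and is not itself an active inference agent, can be sketched in a few lines. This is a toy sketch, not the paper's model: the grid size, deposit amount, and fixed evaporation rate are all made up (the paper describes a random dissipation factor; a constant rate is used here for simplicity):

```python
import random

GRID = 10          # hypothetical lattice size (not the paper's parameters)
EVAPORATION = 0.1  # fixed dissipation rate here; the paper's is stochastic
DEPOSIT = 1.0      # pheromone laid down by each agent per step

# The niche: a landscape of pheromone densities. It is not an agent;
# it just accumulates stigmergic deposits and dissipates them.
field = [[0.0] * GRID for _ in range(GRID)]

def step(field, agent_positions):
    # Stigmergic write: each agent leaves a trace on the niche.
    for x, y in agent_positions:
        field[x][y] += DEPOSIT
    # Dissipation: the niche "forgets" at a fixed rate.
    for x in range(GRID):
        for y in range(GRID):
            field[x][y] *= 1.0 - EVAPORATION

def sense(field, x, y):
    # The observation the niche passes back to an agent at (x, y).
    return field[x][y]

rng = random.Random(1)
agents = [(rng.randrange(GRID), rng.randrange(GRID)) for _ in range(5)]
for _ in range(50):
    step(field, agents)

# With deposit 1.0 and 10% evaporation, each occupied cell approaches a
# steady-state density of about 9.0 rather than growing without bound.
total = sum(sum(row) for row in field)
```

The deposit-then-evaporate loop is exactly the hybrid-model point from the discussion: the agents could be full active inference agents, while the niche they read from and write to is just this input-output structure.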
So we were talking last time about the role of dopamine in influencing the affordances. And this really kind of speaks to the role of reward in a generative model. And so we talk about how one of the beautiful things about active inference is it kind of changes the reward structure in the model. Usually there's some kind of reward that's given to actually finding the food. So there's some reward for finding the food, but even in this model, the reward structure has changed. So it's not just about finding the food. And maybe this is a good opportunity to look through the code maybe as well. But we think about the change in the reward structure, and we place some value on the epistemic, you know, the knowledge about the model of the world. So if we can build a more precise model of the world, we can also gain more food overall. So I don't know if you guys have comments on that. Yes, Stephen, go ahead. I think also, as you say, building that model of the world becomes, it's an action model. So it can be what might seem factually spurious but practically useful. So it may be really useful to have some bizarre ritual about how to get on a soccer pitch and try and kick the ball. If it does something better than what you're doing otherwise, go for it. And I think the challenge comes with increasingly esoteric rituals. If they're not related to the actual causal structure of the world, if they work for a moment, they're not going to be adaptive. Because if your only strategy is, you know, praying to the kicking god instead of practicing, it might work or it might indirectly induce some sort of outcome. But is that a long term strategy? But agreed that the generative model is including a model of action. It's not just about getting precision about how things are out there, because what ultimately matters is getting precision on our actions.
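The point about placing value on epistemic knowledge, not just on finding food, can be made concrete with a toy expected free energy calculation. The split into a pragmatic (reward-like) term and an epistemic (information gain) term is standard in active inference, but the policy names, beliefs, and numbers below are purely illustrative:

```python
import math

def entropy(p):
    # Shannon entropy in nats of a discrete distribution.
    return -sum(x * math.log(x) for x in p if x > 0)

def expected_free_energy(pred_obs, preferences, prior, posteriors):
    # Pragmatic term: expected negative log-preference over outcomes.
    pragmatic = -sum(po * math.log(c) for po, c in zip(pred_obs, preferences))
    # Epistemic term: expected reduction in uncertainty about hidden states.
    epistemic = entropy(prior) - sum(
        po * entropy(q) for po, q in zip(pred_obs, posteriors))
    return pragmatic - epistemic  # lower G = better policy

prior = [0.5, 0.5]       # the agent is unsure which of two arms holds food
flat_prefs = [0.5, 0.5]  # neither outcome is preferred over the other

# Hypothetical "scout" policy: its outcomes are informative about the state.
g_scout = expected_free_energy(
    pred_obs=[0.5, 0.5], preferences=flat_prefs,
    prior=prior, posteriors=[[0.9, 0.1], [0.1, 0.9]])

# Hypothetical "stay" policy: its outcomes teach the agent nothing.
g_stay = expected_free_energy(
    pred_obs=[0.5, 0.5], preferences=flat_prefs,
    prior=prior, posteriors=[prior, prior])

# With flat preferences, the informative policy has strictly lower G:
# value is placed on knowledge about the world in its own right.
```

So even with no "reward" on the table at all, the scout policy wins, which is the changed reward structure being discussed: building a more precise model of the world is itself valuable, and downstream it yields more food.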
And then there's sort of these two parallel paths, or two intertwining paths, which is, do we take a reward-centered approach, is behavior dominated by reward, and then the other side of the reward coin is like the punishment aversion. So carrot and the stick. That's sort of a sister road, a two-lane road. Do we pursue reward or do we avoid regret and suffering? And then a third road is this uncertainty minimization approach. So I pasted in this paper by Colombo, explanatory pluralism, and they review three prominent hypotheses of dopamine activity. So anhedonia, which is about the inability to feel pleasure, incentive salience, and reward prediction error. And I think this is a really fascinating paper, and it does apply to the ant context, because they make a very specific system point talking about the role of dopamine in the mammalian brain, reviewing empirical evidence, and then also pull back to the broader implications for the philosophy of science. So they conclude that the evidence currently vindicates explanatory pluralism. In other words, these explanations do not subsume or knock out each other. There's a pluralism, there's multiple explanations that exist that capture different parts of how the system really works. The vindication implies that grand unifying claims of the advocates of the PTB are unwarranted. I could look up what that is again, but more generally we suggest that the form of scientific progress in the cognitive sciences is unlikely to be a single overarching grand unified theory. A little possible indirect reference to those who claim that the value or the utility of, for example, the free energy principle or active inference is just simply that it explains everything. I don't even need to say any more, or detail where I used it, or show you the code, or show you that it's better than another model. The fact that this model I can fit everything into is the utility, and it will subsume other models and other frameworks.
Now that's a contentious point that has to be either empirically supported or not, but it is just so interesting how, between dopamine as a reward molecule, or the other side of that coin with punishment, or as a precision molecule, the empirical evidence alone is not enough to disambiguate. Trust the data, let the data speak, but then this is just showing us in kind of real time that our models of the world get projected as we do sense making on the experimental outcomes, and it's not always as clear as it seems. It just reminds me, in a very literal sense, if we think about dopamine as a reward for minimizing prediction error, how much do we all like to be right? All of us. Just for being correct, that in and of itself is a huge dopamine reward. And I'm a super ultra nerd, and so I geek out on things like difficult math problems or solving a Rubik's Cube or these kinds of things. And finally when I get it, or even code, finally when the code runs, something that you've been working on for a really long time, it's like a huge dopamine rush for me, which is why I keep going back to doing these kinds of things. But ultimately that's a prediction error reward. So it's interesting to think about. And here's sort of the two ways that they're phrased, and it's somewhat subtle, but I think it gets at the difference between reinforcement learning and active inference. Is there a drive for being right, and then we prefer to be certain? Or is there a drive for certainty, and then there's a preference over being right or being accurate? So are there situations where it actually brings us solace, even though the direction of the outcome isn't what we expected, or isn't, in some sense, what we prefer, but actually the fact that we're able to have certainty outweighs the fact that the outcome went the way it went?
So maybe there's other phrasings too, but just again, which one do we put in the driver's seat, precision or reward? Well, so what's the difference between being certain and being correct? Like being right and being certain? Okay, if I'm really certain, but then that certainty is unwarranted, then that's when I'm wrong, but then I have this prediction error gap, right? So either way, what's the difference really between being certain and being correct? Well, I think, if I could just jump in there, I think one thing that's really interesting is if everything's based on energy minimization, like you've got a process, you're going to keep doing it and it's optimizing. Basically, if I use less energy, I'm being more efficient, I'm being faster, I'm being smoother. So it always seems like, yeah, go for the reward. The thing is though, once you get into something where it's more complex or uncertain, the idea, and they talk about this now actually, there's a whole area in development work and design thinking called failing forward, fail forward. So the idea is fail fast, fail quick, but part of this is, and I always say within ethical bounds, right? Because it might be great for the designer as long as it's not a community member being frustrated by this. But it makes sense in the sense that, and I don't think reinforcement learning can really account for this, is the benefit of doing something because it might be something which doesn't work, but now you can take from it. It's almost like you're giving yourself a sample of the counterfactuals in a way by something going wrong. But obviously we can then break it apart and say, well, there might be something right still in that and some bits which are more wrong, and we can sort of do stuff with that information. Almost like succeeding forward would be better than failing forward.
I mean, failing forward has this psychological benefit of helping people recognize that it's okay to not get it right the first time, and just get something on the paper and iterate from there. So the rhetoric of fail forward makes sense, but having something that's even positive, like succeed forward or win forward or contribute forward or learn forward or develop forward, those actually capture the functional spirit that is here. And let's think about how that relates to Blue's question. So what's the difference between being certain and correct? So there could probably be many takes on this, but I thought of this classic science textbook figure. You know, when they give you, in the science textbook, the difference between accuracy and precision. Now, this also is not coming from the active inference ontology, so they may be using these terms in a slightly different way, but the point is we're getting at two complementary components of precision. And basically, we can see that there's these two off diagonals, the top left and the bottom right. So high accuracy, high precision. It's like your average is on target. So the accuracy is where you're aiming; accuracy is the aim. And then the precision is referring to the tightness of the distribution around that mean. And so you could have high accuracy, like on average it's at the bullseye, but the precision is low, the uncertainty or the spread is high. You could also have, like, a thermometer giving you very tight readings, high precision, but it's systematically biased in a certain direction. And this is just one way to get at two different aspects. Now we've seen this in a few different cases in variational inference as being the difference between the expectations, the means of the distributions. That's like the accuracy. And then the precision is like the variance of that distribution. So you could have a very tight distribution, the Q distribution, very tight and close to the P distribution.
That would be accurate and precise. You could also have a really overfit, overconfident model that is just simply quite divergent from the underlying P distribution. So which one is being optimized? Are we trying to aim for a tight scatter, and then second order is we want it to be in the middle? Or do we aim for the middle, and then second order is we want it to be tight? Stephen? Yeah, that question is a good one. And I think it's good to distinguish, because often they might get conflated in many ways, but it's good to pull apart precision and accuracy. Because you could be very precise in the fuzzy set that you have, but it could still be fuzzy. You could have something, like you say, more accurate, but it moves around a lot. And I think that that's an important distinction that might be lost in many cases. Yes, and it's actually one reason why it's important to sort of stick with the uncertainty, and pass the estimate, the state estimate, as well as the uncertainty around the state estimate, pass that through. Because if you collapse the uncertainty and you treat all estimates as being equally precise, there's this whole dimension of the data that has been dropped. For example, in gene expression analysis, sometimes there's uncertainty over whether a gene is differentially expressed or not. Like it has a distribution and you pick some cutoff, but then sometimes people take the genes on one side of that line and then they just move forward with them, like these are the differentially expressed genes, as if it was a discrete categorization. They take a continuum and then discretize it, which makes certain kinds of analyses really simple or straightforward, but a whole aspect of the variability of the data has been completely crunched out. Stephen? Yeah, and this also then ties into scale as well. You know, you could, and how much, so like you mentioned the thermometer, so the thermometer gives a fairly precise sense of the reading.
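The accuracy-versus-precision distinction from the textbook figure, including the biased-thermometer case, can be sketched numerically. This is just an illustrative simulation; the target, means, and spreads are made up for the example:

```python
import random
import statistics

TARGET = 10.0
rng = random.Random(42)

# "High accuracy, low precision": centered on the target, wide spread,
# like the ant foragers whose average lands in the middle over many trips.
accurate_noisy = [rng.gauss(TARGET, 2.0) for _ in range(10_000)]

# "High precision, low accuracy": very tight readings, systematically
# biased, like a well-made but miscalibrated thermometer.
precise_biased = [rng.gauss(TARGET + 1.5, 0.1) for _ in range(10_000)]

bias_a = abs(statistics.mean(accurate_noisy) - TARGET)   # small offset
spread_a = statistics.stdev(accurate_noisy)              # wide scatter
bias_p = abs(statistics.mean(precise_biased) - TARGET)   # large offset
spread_p = statistics.stdev(precise_biased)              # tight scatter
```

The two summary numbers per sample are exactly the two complementary components discussed above: bias tracks accuracy (where the mean sits relative to the target), spread tracks precision (how tight the distribution is around its mean), and each sample scores well on one and poorly on the other.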
It's fairly consistent, and it's averaging out. The average is relatively accurate, but it's not going to give you the range or the spread within that, and that can apply to a number of things. And then as you go down to less and less, if, for instance, someone's purely looking at one ant or one ant's behavior at a time, then you might have this sense of being very accurate in terms of my ant analysis, so to speak. However, at the level of the, you know, the larger colony, you're not as accurate. You know, so there are some interesting sort of scale things that can't be avoided really. Are ants accurate and/or are ants precise, the nest mates? So it's almost like maybe we could think of them as being high accuracy, but low precision. Like, let's just imagine what that would be like for the colony. So each forager is in the neighborhood of making adequate foraging decisions, but there's a lot of noise around that. So over thousands of interactions and foraging trips, you're going to end up getting a lot of your points in the middle. So it's cheap, because it's lower precision, so you can have simpler subunits, but it's centered around the target. Whereas if you have it centered over something that's not the target, it's going to be suboptimally adaptive for the colony. But I think ants just show that it's not about making every single action perfectly the right one. It's about being right more often than not, and then just doing it many, many, many times, which is related to that sort of develop slash fail slash succeed forward. So just one thing. I think this is also useful just in the sense of, and it's almost implicit with the agent based modeling approaches that get used in biology, but it's quite interesting in the sense that it brings in the intersubjective piece. What are these ants trying to do, for instance? How are they trying to organize what's going on?
Rather than the first-person perspective, what do I see? What's my perspective on the ant as an observer, which might be more psychological in our world, or what is it that I see the group doing on average, like the temperature? But when you start to get into agent-based approaches, it does allow this second-person stuff to happen, which, I know, in psychology is really the missing piece in a lot of cases. So this in a way also ties into thinking about agents, rather than thinking about what I see someone or something doing from the outside and inferring from their behaviors externally; it qualitatively sets up a different type of experiment or modeling. So I wonder about the different scales, like as we scale from the nest mate to the colony, are they comparable? Are they asking the same question? The colony wants to know, will there be enough food? And the forager never asks that question. The forager always wants to know, where is the food? So they're asking very different kinds of questions, because the forager knows with no uncertainty that its job is to find food. And so the colony wants to have enough food, but the forager always wants to get the food. And I wonder how this kind of asking different questions scales to things like the brain and components of the brain, or the internet and components of the internet. Like, what defines a scale, whether we're asking the same question or not? Or is it the goal of the system that defines the scale of the system? I don't know. It just made me think about it. Stephen? Yeah, I mean, that's a good, that's a really good point. I mean, yeah, if you're starting to look from the agent, you could even think about that in the brain, like, yeah, if different parts of the brain are foraging information.
So that would explain a little bit why you have these kinds of parts, you know; different parts of the brain have different personality queries going on, even when certain parts of the brain get sort of put to sleep, so to speak. So rather than, yeah, I mean, that could explain, if they're each working away, like a colony: which ones are in the driving seat, some get to ask different questions, or some get to have their questions influence other questions, and some of them just get on and do their thing, if that makes sense. You're a forager, basically you're going to forage, all right, that's the deal. But someone else might be like, I'm hungry. You've got to forage more. Right, come. Oh, I was just rapidly assembling some image memes. It actually breaks down well into chess and other fields, other serious games of tactics and strategy. So strategy, so like, Blue, your question: the colony was asking, do we/I, we don't know what pronouns the colony uses, do we/I have enough food? It's as if it's asking that question. Does it identify as an entity, an I, or does it identify as an ensemble? We don't know. But then the nest mate is asking the more tactical and action-oriented question, which at a first pass is like, where is the food? But we know that in active inference, what matters isn't just simply inference on the location of the food out there; what matters is inference on our action selection, for how we're going to get that food, or how we're going to implement a strategy that has a good trade-off of us finding it. So like, here, chess meme: tactics mean doing what you can with what you have. So like, whatever pieces are on the board, you're going to have to do some sort of action selection for your next move. Here: strategy requires thought, tactics require observation.
So that's kind of interesting, because observation is one of our terms in the ontology; the agent is receiving observations. Without observations, tactics cannot be implemented. And then strategy is requiring mental action, just like our previous discussions in 28. The thought is actually the planning over observation strategies. Nice, double Blue. And there are other definitions of strategy and tactics and how they're similar or different. So it's just kind of cool that it's a way to think about maybe not just whether the levels are asking different or the same questions, but whether levels might be defined or delineated by the difference in the questions that they ask, independent of the spatial size of the subunit. The more tactical is like more agent-level, and then the more strategic, requiring reflection or cumulative culture or mental action, that level is being implemented by higher and higher nestings. Wait, Blue, you're muted because you went down to just one. Can you hear me? Yeah. Perfect. I went down to just one so you can share the memes with us, because I'm missing out on the meme action and I can't stand it a little bit. I'm so sorry. So speaking of action and different levels, this is something that I've really been thinking about over the course of the last week. This is like the next question on this slide, 11: this prospect of intergenerational stigmergy. And I just wonder, as we were both foraging for memes and slide content, creating the 29.0, we have this internet, right? We just Google search for whatever, and informational foraging I know is not the same as maybe ants foraging for food, but we're both foraging, and we came across the same kind of content and ended up putting it into the 29.0 in different ways and different places.
And I just wonder, you know, there's this concept of intergenerational stigmergy, and I wonder if this is human-centric, or if there are other examples of this in the animal kingdom and perhaps among ants. I know, Daniel, we've talked at length about the different foraging habits, for example, of different species, or of the same species of ant with different colonies. And so where do they get this habit, like to forage differently, like at different times of the day or whatever? And it spans generations. And so is there some trace left in the environment? Like humans, we leave traces for the next generation. I mean, we directly do some cumulative cultural evolution transfer. And there are other species I know that teach, like dolphins, I know, teach some foraging habits. So what other examples do we have of this kind of intergenerational stigmergy? Or, like, you know, we leave traces, like the time capsule, you know, like every decade we open up the time capsule from whatever the previous decade was, and find things like 8-tracks and whatever things are put in the time capsule that are just like, oh look, way back in the day. So I don't know, what do you think about that? Just relative to ants and relative to intergenerational stigmergy and the different foraging habits of colonies from the same species? Good question. And where do the priors for foraging come from? And then to what extent is there scaffolding of one nest mate in their foraging activity by older nest mates, which is actually within a generation; that's like sort of some skin cells being scaffolded by previous skin cells. So like, if you just kill all the foragers of a colony, the foragers that step into those roles will perform sub-optimally. That's actually related to colony collapse in the honeybees.
So there's some element of, like, they have an inkling of how to adapt their foraging, yet also there's flexibility. I mean, it just makes me think of how we could actually study it: like putting nest mates in different environments, or having one colony and splitting it and training it a certain way and then reintegrating them, and seeing whether what mattered more was the environment at that generation, or whether it was possible to have epigenomic or genomic changes that sort of lock in and shape foraging. And then on twelve, doing a little live meme modification: foraging is not an act but a habit, with a probably misattributed Aristotle quote. So it's almost like foraging is not an act, it's a policy; it's a policy over specific motor actions. But yeah, the intergenerational part is very interesting, and I'm just thinking if there's other true intergenerational ant scaffolding. One example, not in foraging, is like when a leaf cutter ant queen goes on her nuptial flight, she takes a little sub-sample of the fungal garden and uses that to start the next generation, which actually I think relates to the second point you had about 30. So that's super interesting and also directly related to the microbial transfer that happens, like through the birth canal. There's a difference between C-section delivered babies and babies that actually pass through the birth canal, just in terms of how they're inoculated, as well as breastfed babies versus formula-fed babies; there's a huge inoculation difference with your microbial flora. And I wonder what this speaks to with the holobiont concept: we're not just humans, we're humans inoculated with all kinds of things, and maybe ants aren't just ants, they're ants plus that fungus. And there have been studies on that fungus, like on the fungal composition: is it different from colony to colony, do you know?
I don't know. I know that one group in my previous lab, they isolated nest mates and then they treated them with antibiotics and changed their microbiome, and they and others have also shown that there's a role for the microbiome in recognition, because it modifies the smell of the ants, which modifies how they recognize each other. Maybe that's more like a dialect; that's kind of how they recognize that they're in the same group, in the very short time range, or that they're likely to be collaborative. But I return to 8, to think about what you just described about the body and the inheritance of microbial forms. Is there a way to refer to that? Human is the species, but then if humans contain and are multitudes, there's the body, there's the other life forms, there's social interactors. What if humans aren't the species but the host? Which ties very nicely to our next livestream, which I'm very excited about, with Matthew Sims. He's going to come and talk to us about the symbiosis between squid and Vibrio fischeri in the light organ, which is something near and dear to my heart. I worked in New Mexico, and right upstairs, you know, you have those labs where you know everybody in the lab, and you can come and work and share equipment sometimes. And so I knew this lab, and our lab were very close, and we had the lab of Michelle, and she studied Vibrio fischeri and the light organ. So I knew all these people doing directly related projects. But it's a really cool concept that, you know, maybe we're all in some form of symbiosis. And I know in mice, they have sterilized mice, and the mice grew up totally neurotic; like, the microbiome-free mice are completely neurotic and have way abnormal behavior. So there may be links from abnormal or neuro-atypical behavior to microbiome stuff as well. So it's really interesting to think about: maybe we're all symbionts in some fashion, I mean, and like, maybe we're not the species, but we're
just a host. Reminds me of Dimitris Bolis speaking about "through others we become ourselves", and situating active inference in the dialectical tradition: you have a coming together of different pieces, and they don't reduce their distinction; they actually become, especially over evolutionary time, more polarized. Like, the nuclear genome becomes more nuclear, the mitochondrial genome becomes more reduced and mitochondrial; or the host goes on one trajectory and the parasite goes on the other. They don't converge and equalize each other; they end up becoming increasingly specialized parts in a newly emergent type of system. And then these mutually specialized systems can be fragile, because maybe now there's a failure mode of the part, and then that causes the whole system to fail. So like, Bucky Fuller would always talk about how all cases of biological extinction are due to over-specialization. It might be a bit of an overstatement, but that's sort of the idea: generalists can deal with changing niches better, because they're always able to hedge their bets amidst uncertainty and average out over all the different foraging investments, whereas a specialist does really well until they don't. But in life, it's not just like you can do well most of the time and then you can fail sometimes; living systems need to be succeeding at least adequately all the time. If any part of your body was under-producing ATP for like five minutes, it would get tissue damage. So it's better to be adequate all the time, rather than hyper-successful for almost all the time. And then what kinds of strategies are resilient amongst that kind of uncertainty and situation? So, about halfway through what you were saying, I wondered when we started having a political discussion, because it just really speaks to specialization, or maybe polarization, which we see in society. So like, you know, we have these different factions or different subtypes or subclasses of people, and biological systems that
don't become more like one another; they become more separate from one another. And I think, you know, in the resounding echo chamber that is a self-similar political environment or class environment, you know, it becomes this echo chamber that just increases our certainty, increases the validity of our own model, and makes us question the model of others. So I wonder, and this brings us back to what we're going to discuss with Matthew Sims next week, is that right? Is the generative model becoming more similar for the fischeri and the squid, or is it becoming more different? I just wonder. So through interaction, do we become more similar or different? Or what kinds of systems, through interaction, become more similar or different? And then how can systems be more antifragile, given that relationship amongst the subunits, versus which failure modes are you de-risking on and which failure modes are you increasing risk on? So it's an interesting question. I mean, it's not just political; it is a very broad question. Blue, let's look at the code and maybe just scan through some sections of it, and then start to see, on 11 before we get to the code, just like, what else could we include in an ant model? We talked about the natural history of the ant, of the Formica species in the paper, but we used one pheromone, we didn't have interactions; we had such a sparse pulling over of features of ants into our model, so that's why it's just an introductory framework. So what other pieces could exist for ants? And we can kind of use the active inference breakdown. What perceptions could we add in: tactile interactions, other olfactory interactions, light, the intensity, the color, the polarization of light, another type of polarization. What type of cognition could we add in? So this was like a one-step decision-making ant; it just makes a decision for the next time step based upon the local pheromone density. What kind of cognitive processes actually exist in ants, or what
kind might be interesting to see in different systems, like memory? Then what actions exist? Here we had a movement action affordance, any of the directions, and then we had sort of a pheromone deposit affordance that didn't actually have a specific control policy; it was just an if-then: if you already touched the seed, then you deposit pheromone; if you're still looking for the seed, then you don't deposit pheromone. So we can think: what are the perceptions, cognitions and actions of the ants as currently modeled in the paper, and then where can we go with that? And then, to go from that question about the simulation to meet the empirical data, the other side of the table: what kinds of ant data already exist? There are several very fascinating databases that include genomics, gene expression (transcriptomic) and epigenomic datasets. They're sparser for ants than they are for humans and mice, but they are out there, and there will be more of them in the future. What kind of behavioral data already exists? Ethology, the study of animal behavior; video data, which maybe was analyzed under a certain framework but now could be readdressed; and then RFID and QR codes that can do semi-automated, semi-high-throughput tracking. So people attach RFID chips to honey bees and track their foraging activity. And then we also have information on their ecology and morphology, features which were not in our simulation. So this just sort of primes looking at the code, by asking how the code could be modified, and then how the world could already be seen as reaching out to us with these helpful pieces of data, and then where we are going to be in the middle: like, what kind of model is going to connect the empirical data sets that do or could exist to the models that do or could exist? And then what questions are most important? Because we want to be asking relevant questions, and so it's better to keep the goals in mind as we look through everything. Okay, let me just re-crop this
window while I get the code up. Anything you would just say on the overview, or like general thoughts on the kind of multi-agent simulations or foraging simulations that you've made? I have some questions about this simulation actually, or some things that I would think about including as parameters, like in a dream simulation where we, you know, could simulate whatever with unlimited computational power and real-world data. So I think that here, I'm interested in nest mate interactions. So I think that that would be something interesting to model, like the different types of ants that exist, like the nurses and the foragers and the queen and the reproducers. We went through all the different types of nest mates last time, but I would just be interested in kind of simulating how, as a colony, ants interact and function. I know that that's, like, you know, very whatever, big. And I mean, in most simulations that I've done, it's always the same type of agent, right? Like, you give the agent different options, but it's never like, this agent doesn't forage and this agent does. I don't know. But I mean, I know that people do do some modeling with different types of agents; I just wonder what the compute power is and what the validity of that is. That's great.
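Before diving into the actual config and ants.py below, the one-step, pheromone-following ant described above can be caricatured in a few lines. This is my own toy sketch, not the paper's code: the grid size, decay rate, and the "food found halfway through" switch are all made-up stand-ins, and a softmax over local pheromone values stands in for minimizing expected free energy over one-step policies.

```python
import math
import random

random.seed(1)
GRID = 20
DECAY = 0.01  # stand-in for the stochastic pheromone decay factor
pheromone = [[0.0] * GRID for _ in range(GRID)]

MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # a reduced "action map"

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def step(pos, returning):
    x, y = pos
    # Perceive only the neighboring cells (a cut-down "focal area").
    legal = [(dx, dy) for dx, dy in MOVES
             if 0 <= x + dx < GRID and 0 <= y + dy < GRID]
    # Preference for high local pheromone drives action selection.
    probs = softmax([pheromone[x + dx][y + dy] for dx, dy in legal])
    dx, dy = random.choices(legal, weights=probs)[0]
    if returning:  # the if-then deposit rule: lay trail only after food
        pheromone[x][y] += 1.0
    return (x + dx, y + dy)

def decay():
    for row in pheromone:
        for j in range(len(row)):
            row[j] *= (1.0 - DECAY)

pos = (GRID // 2, GRID // 2)
for t in range(200):
    pos = step(pos, returning=t >= 100)  # pretend food was found halfway
    decay()

print(sum(map(sum, pheromone)))  # trail laid only on the homeward half
```

The bi-directional coupling, where the ant's deposits change the field and the field changes the ant's move probabilities, is what makes even this toy loop stigmergic, in the sense discussed later for the environment class.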
Okay, so we're gonna take a sort of a stepwise approach. And one thing that I think will be fun is, there are people who know way more about Python and coding than both of us, for sure, including Alec, Chance and others, and then there might be others who have learned about active inference, or read a lot about it, maybe even looked at the math, but here is gonna be where we see how it looks in code. So there are these multiple languages and ways of communicating and learning by doing active inference: natural language, math as a language, programming as a language, art as a language; maybe there are other ways too. But at the very least, we know that we have natural language, mathematical formalisms like the equations that we're always putting up, and then here's the code. Those threads might look more similar or different as we start to learn more about interactive notebooks and ways to integrate our language use of the natural kind with the programming kind. So we're gonna do kind of a step down. First we're gonna look at the config file; the readme file in this case is not super informative, which is fine, but the config file is where we're gonna start. That's gonna tell us about what kinds of variables and parameters matter. Then we're gonna look at figure one, and that's just sort of gonna be like, okay, this is how you run it. And then we're gonna pull back to the ants.py and look at what functions and subroutines are actually defined inside the code. So the config file is only 31 lines long, and it has some pretty clearly defined variables being set, and the idea is that all the parameters are stored in this file so that you can kind of hit play, and then it will look to this config file; we'll see where that happens in a second. And so the first block relates to, if you want to add the ants in a staggered fashion, how many time steps should happen between adding new ants in, and what's the location of where the ants should
begin. Nest factor, we can look at; I'm not exactly sure what the nest factor has to do with, it might have to do with the distance of the nest, but let's ask Alec on that one. The grid is the size of the map, so here it's 40 by 40. And then here is a very interesting block: it's the focal area, which is 3 by 3, so the grid around each nest mate is just 3 cells by 3 cells. And so this could be changed to like 4 by 4; something like, what happens when the ant has a broader range of perception than it might have for action? And that's the action map, and so those are the actions that can be undertaken, relating to basically moving, like down and to the left or up and to the right. The food location and the size of the food are provided, then the wall, so that specifies, within the grid, what is the shape of the maze inside of the map. 24 and 25 describe the number of discretized levels of pheromone that exist, and then a factor that is the stochastic decay of the pheromone trail. And then here the observation states, so like the O in the active inference model, are defined as the number of pheromone levels, so in this case those are both 1 through 10 discrete levels. But it's so interesting, because it opens up the door for the generative process to differ from the generative model. So like, there could be 3 categories of observation, low, medium and high, and so you could still have a continuous variable or a discretized variable in the world, in the generative process, but the generative model could be over a reduced version of that, which might be all you need to know. Maybe there's enough stochasticity such that just knowing if you're in high, medium, low tells you whether to put on a jacket or not, you know; you don't need 5 decimal points on the temperature. The number of action states is also reduced to the focal size, but just as mentioned, that could also diverge. And then the length of the simulation, I think, is specified in 31. So those are
some of the parameters that exist just in the base simulation, some of the degrees of freedom that do exist, and that's even before introducing these other sorts of pieces that could exist. Any thoughts here before we go to figure 1? Just, like, you know, I always want to know why: like, why did you choose this, or did you play with the decay factor, or did you play with the length, the time steps? I'm sure maybe your viewers commented that as well, but I'm always curious to see: were there any unexpected outcomes or optimal outcomes that occurred? Like, did you pick 500 time steps because that was, you know, best, or the decay factor of 0.01 because that was most realistic, or those types of things? Good questions, and I think we probably could have organized these with some sub-headers and some comments a little bit more clearly. So we're going to look at how the simulation gets specified, and then pull back to, like, what are the subroutines. So the top part of a Python program is basically pulling in, and then giving nicknames to, other resources that are required. So these resources can include packages like NumPy, which is a common toolkit used for math and statistics in Python; I'm sure Blue can say more on that one. And then there's the config as CF; that's when we call like CF.number of observations, that's going to call this, which is defined as this, which is 10, but of course that could be changed. So that's where the variables that are set in the config file come into play. So the header sort of introduces the background libraries and resources that need to be imported; then a few parameters are defined, so that you can have a base config set, and then you can also just locally define some variables, like the number of initial ants. You can define variables that are in the config file, or you can add some new variables in. And then basically, for observations in the range of observations, this
runs a routine, and the routine is going to be defined better in ants.py, but it basically says: we're going to run it for that many steps, for that many starting ants, for that many maximum ants; we're going to save it; we're going to switch the location of the food; and then we're going to just print some summary descriptors, like the number of round trips, and then save that in a file. So that's sort of the interface wrapper for running a simulation, outputting some of the descriptive statistics, and then also saving that file. And then ants.py is 353 lines long, so it's a little bit longer, but there are some parts that we can, I think, walk through. The first is that the ant is a class of object, and it has a few routines. It has its initiation routine, which is where, for example, its position is given the starting location, and then there's updating, a forward and a backward step. There's the class of the environment. So this is kind of like, if you do NetLogo modeling, agent-based modeling, you have the turtle, the agent, and then the environment. So here we're defining the agent and we're defining the environment, and the relationship between the two of them is really whether we have a stigmergic type multi-agent system or not. So you could have the environment influencing the behavior of agents, like the sunshine is influencing the behavior of the ants, but then the ants don't modify the sunshine, so that wouldn't be stigmergy. But if there's a possibility for the agent to modify the environment, and for the environment to modify the probability of agent behaviors, there's some sort of bi-directional relationship, and then it is stigmergy. And basically, some of the functions that are defined for the environment have to do with which observations are at a given cell, whether the food exists at a given cell, whether the walls exist at a given cell. And then the step forward and the step backward relate to the outgoing and the homecoming trip, and this is basically just
doing a delta, which I think is the action map; it's the delta in position, and then it checks whether the ant is returning, which is going to be, I believe, on the step-backwards side, versus the ant going forward. Decay is a function that's also inside of environments, because decay has to do with the decay of the pheromone at that cell, and then the plotting has to do with just saving the file, so that's some of this, like the gif and the save fig. Another object type that's defined, and this is really where there was the main carryover from the previous models, is the Markov decision process. So here, just like we've talked about in various contexts, the A, B, C and D are all defined and initiated here; those are used as you'd expect, and then the MDP needs to have these functions that relate to setting and resetting certain variables and taking a step. And then I love this line, 210, active inference: this is sort of the kernel, this is where it's happening. Here's the negative expected free energy, and then there's a calculation of the negative expected free energy and the negative utility, divided by the scale factor. Pretty interesting, and I don't think we experimented with modifying this scale factor. And then, so can I just interrupt you, what does the scale factor do, really? Well, it's mentioned only two times in here, and I'm not exactly sure what it does here. I think that when you're comparing relative policies, as long as you're dividing them all by the same scale factor, it shouldn't change the ranking of preferences over different policies, though with a softmax it would change how sharply the probabilities concentrate. So I'm not sure why the calculation exists this way. I mean, it makes sense that they're all scaled by something; maybe it's just to make the numbers more manageable, who knows. Yes, let's definitely think about that, and then we could get more explanation from Alec. But for the negative expected free energies, one possible option would be, this is like over observations, so we know
that the observations are discretized between 1 and 10, so if we divide by 10, maybe we're going from like a 1-to-10 scale to a 0-to-1 scale, which then lets us interpret more cleanly as, like, probabilities, because the action ends up being selected based upon the softmax of the negative expected free energies. So instead of like 8, 1 and 1, if you do 0.8, 0.1 and 0.1, it's really easy to then say, okay, right, roughly 80% likelihood of choosing this action. Here's a colony-level function related to an ant being initiated with a map, plotting and saving functions, and then here's the main. So a lot of times in Python programs there are a lot of upstream functions that are going to get called, and then main is called, and when main is called, that's where the main program actually happens. So this instantiates the environment, the ants and the paths; for the number of ants that can be specified, that many ants are created with reset MDPs; all of the image files are set to 0; the descriptive statistics, like how many completed trips, and the ant locations, are reset; and then here is the time-step-based simulation. So here, for each ant in the ants that exist, there are going to be some calculations; this is related to the distance calculation; here's adding the ants; here's switching the food; and then here is checking, for each ant trajectory, whether it's returning or completed, and then saving the file. So again, I think there could be a lot more organization and commenting to understand what these different pieces do, but yeah, I believe those are some of the main pieces. Nice, we're going to move to the figures now. Sure, I can actually try to download the videos, and I can play the videos. Oh yeah, that's great. So yeah, I thought we were going to switch back and forth from the code, so we'll go to the paper's supplemental data stream directly. Will it play? Oh great, it will play. So here are, like, the ants; you won't see it, but the ants are diffusing around; the red ones are going out and the blue
ones are coming home. So this is just as if you were watching the colony with video. In Video 2, I think we're going to add in the pheromone density. So here the brighter colors are the pheromone density, and like, there, one ant made it home, but then it didn't; here comes another one, it didn't sort of lock on, but now we can see there's a light ridge: it's locked on to the trail, and it's going to reinforce the trail. And then it seems like the food has switched, because now we're seeing some pheromone getting deposited on the left side, and there's not as much deposition on the right side. But this is what really happens in ants: like, if there's a high density of pheromone on that top right, then ants are spending a lot of their time there, especially if they're carrying food; they might actually be reinforcing that part. So some of the videos, their similarities and differences are described in the text, but I think the interesting ones to look at would be, I think, 6 and 7. So here there's a preference for high pheromone density, and then I believe it's 7 where the ants can lay down pheromone but they don't have a preference for it, and so it ends up getting laid down, but then it doesn't get reinforced, because they just don't prefer it, so they act randomly with respect to pheromone. But these are just, like, fun simulations, and yeah, a totally obvious next step, an important one, would be to explore within what sets of parameters certain outcomes occur, like, when do they always lock in, when do they fail to lock in. But yeah, good supplemental info. Awesome. What else is the next obvious step for you, or where would you like to see this simulation go most? Like, if you were to, you know, have unlimited time with Alec to say, let's code XYZ, what would you dream of first? I'd probably go for the annotation and clarification, because that's where coding expertise really comes into play, and that helps other people modify and build on the code the
most. Once it's a little more reorganized, I think it would become clear where other sensory and action affordances could be plugged in, such as adding another pheromone. So instead of just having, you know, pheromone one-through-ten density, make it specifically pheromone scent one, and that would allow us to go, okay, now let's just copy and paste to get pheromone scent two, and then we could have more flavors.

So what would you make pheromone scent two do? Would you make it like pheromone scent one drives you to the food and pheromone scent two drives you home, or would you just use different decay rates, or what would you do with the different pheromones?

One duality would be plus and minus, if there was the appropriate behavior around using them. Another would be fast and slow decay, which maps onto the way that ants actually use these scents, with some longer-chain hydrocarbons that are longer lasting and some shorter-chain ones that are volatile. Another interesting cognitive model would be: there are two or three pheromones, and what the inference is on is the ratio, and then it could be shown, no cheating, that the ant is only perceiving the ratio of these two things. It's kind of like bimetallism: you tie to the idea of a ratio rather than to just the absolute amount of one. So food could be a little sweet and a little salty, or a lot sweet and a lot salty, and the inference of whether it's tasty is whether it's balanced. If we're thinking about the ant tasting balances in the world, maybe that could help it have an internal manifold that by definition would be simpler than the state space of the pheromone densities out there in the generative process.

So I heard that there are two types of ants, like grease-eating ants and sweet-eating ants. Are they just different? What makes that distinction, actually? And does the same ant really eat both things, or what's
up with that? That just reminded me of the evil, neutral, good classifications for characters.

Then what are the major macromolecules, the nutritious molecules other than vitamins? Well, there's protein, carbohydrates, and fats or lipids. Different parts of the colony life cycle might use these different nutrients differently. But then, what are the resources that could be obtained? Sweet could come from an extrafloral nectary, so a plant that's secreting nectar, or from scale insects and the carbohydrate-rich honeydew that they secrete, and there are ants that love sugar and pies and stuff. Then there's oil, and there's also protein: there are meat ants, and for little ants in the canopy in some rainforests, I've heard from many ant researchers that the most effective traps use human urine, because it's high nitrogen, high protein, relatively speaking, if you're a tiny guy. So it's interesting why some species would prefer more of one than the other. If you're putting on more biomass, like if you're making larger colonies, you probably need a lot more protein and lipids, whereas if you have longer-lived workers, then you don't need as much lipid and protein, but you might use a lot of carbohydrates. That's a fun way to think about the workers, like a car: you need the metal to make the car, but then you only need gas. So you need the protein and lipids to make the worker, but afterwards you really use more carbohydrates, even though they probably also digest other nutrients.

I think the piece to connect would be to find some student or person who loves meta-analysis, it could be us or it could be someone else, and establish which data sets are already there, and what's a recent impactful application that has already done a lot of the meta-analytic work of at least compiling data sets. There are several ant databases that have compiled behavioral data, morphological data, ecological data, genomic data, and then
how could that data format play into this model, and what are we going to put in? And then I think a good step would just be to show what the active inference ant model explains, predicts, or suggests that other first-principles approaches like reinforcement learning might not.

Very nice. So my question about the different types of food that ants eat was really linked to different pheromones, like types or ratios or concentrations. If I have a high preference for lipids but I'll eat sugar, maybe my ratio preference is different. This is another way to enable different types of ants, or just to give them different preferences over the food that they eat. And then, if we initiate with the same kind of agents and the same kind of food, do they all distribute proportionately, or do some carbohydrate-preferring ants go to lipids and some lipid-preferring ants go to carbohydrates? I just wonder. And does it have to do with proximity, or the number of ants that were there before? I think all of that is interesting, and it relates to this multi-goal aspect of active inference that's always really difficult. I mean, it's easy when we think about "ant, get food," right? The active inference model is easy, it's manageable is what it is. It becomes computationally and mathematically tractable when it's a single type of agent with a single goal, or even two different types of agents with two different types of overlapping goals. So you can see just how it exponentially expands when you have this kind of multi-goal thing, and some goals are more pressing than others, right? My immediate needs have to be met for me to, you know, desire to go to college; I have to have food, water, shelter, clothing, et cetera. So these kinds of multi-goal and goal-prioritizing setups are really interesting, and a great way, I think, to start to approach modeling real systems, or us, or complex systems that have multiple,
sometimes conflicting goals.

Another setting to transpose and explore that, where there's a ton of data, would be to first make active inference work with honeybees. There they don't use stigmergy on the trail, because they're on the wing, they're flying, but in the nest, as it were, you could have a special dance floor, and that is where maybe an active inference communication model could come into play. This would be less on the stigmergy side and more on the multi-agent distributed communication side: swap out the trail modification for a symbolic waggle dance, whether the agent actually does a waggle dance or whether we just say, okay, a waggle dance occurred and it had this precision and these estimates. Honeybees are well studied for making the trade-offs between foraging for nectar (carbohydrate), pollen (which has lipid and protein), and water, and there's a lot of work on questions like: is it that 100% of the variance in foraging outcomes is about the needs of the colony, and all workers respond to the needs of the colony? Is it that the workers have their own nestmate-level preferences, and then the colony outcome just is what it is? Or is it somewhere in between, where there are preferences that differ between workers, maybe as a function of their genetic background, but the colony outcomes are also in feedback with them? That could be a case, and I think this kind of takes us back to slide 8: when we have the systems lined up and we understand their similarities and differences, then we'll be able to transpose across different systems in a really interesting way, and then ask, kind of like biomimicry on behavioral systems, is this more like a honeybee situation where people are exchanging information and then going their own way, or is this more like a trail stigmergy situation? So there's going to be some early work transposing into these major different regimes. Like, again, this was
the first paper that transposed active inference and included stigmergy. Although that had been discussed by Axel Constant and others regarding ecology, it had always been a qualitative part of the discussion that niche modification was relevant and that agents were able to be in feedback with their surroundings. Then it went from the natural language through the formalism to the code, and now it can feed back into what we think and write and speak, and then we'll transpose into a few of these major domains, start to subspecify into situations, and it'll be very cool.

It's super interesting, the communication versus the trail, which of these... and I just wonder what you think: as humans, which are we more like? Are we more like bees or are we more like ants? Do we communicate in a way, or do we just follow in a row? Where do you think we're at on the bee-to-ant trajectory?

It makes me think of a drive that we went on this weekend. There were the street signs, and then there was a big field, and there was the pleasure path that was cutting the hypotenuse across the field. It was like a big block with an empty field in it, and people were walking and making that trail. There's a juxtaposition between the societally imposed and scaffolded infrastructure, like the streets and the symbolic signs (stop sign here, and this is where this street and that street intersect), and then the people who were moving on that landscape. Even if you don't see any other people at that moment, it's their stigmergy with the trail. So it is an interesting question. We have symbolic rules and scaffolds like law and norms and culture; even though there are ways to think about those as stigmergy themselves, at the agent level that's more like symbolic communication, and then the ant-style stigmergy blurs the lines that the symbolism provides for us. So I think there are some components of both, plus other insects,
maybe non-insects too, but that's quite contentious.

That brings me back to the concept of intergenerational stigmergy, but really, maybe my question was not phrased correctly. Sure, we leave some stigmergic traces as humans for the next generations, but we also leave stigmergy for strangers. And I know that the ant colonies and the pheromones are different, and the different species are different, but I wonder: is it the fact that total strangers can go searching for collective intelligence or collective behavior or something like that on the internet and come to the same result, or that total strangers see the stop sign and know that there's some sort of behavioral norm that's supposed to be enacted, just by seeing the signs in the street? We do this externalization of cognition a lot as humans, and this is what I want to see in the animal kingdom. I want to see traces of, like, can strange animals say, oh, I see a rock stack, I know the trail is this way? Does that happen in the animal kingdom, or is that maybe uniquely human?

Good question, and I think this is why we can study these systems and gain insight without naively copying their strategies or thinking that we must be them or they must be us, because whatever it is that we're doing, it's at the intersection of stigmergy and cumulative culture modification and symbolism. So in a way, in the ant and the bee foraging paradigms (hashtag not-all-ants, because some don't do pheromone trails, et cetera), one extreme is the syntax of the foraging trail, where there's no symbolic communication, it's just stigmergy, and the other extreme is no stigmergic modification, no trace left in the environment at all, just transient symbolic communication. So it's this continuum from the transient symbolic, like I'm going to pass you a message and then you're going to be off on your way, to the stigmergic and often sub-symbolic, like the path
crossing that field. It isn't directly a symbol saying "walk on me," but it is an affordance; we perceive it as an ecological affordance. That's not what it symbolizes, that's what it is, and that's how we enact it. So we could have some types of active inference models that highlight that transient, among-agent, meaning-based communication, versus models that don't need to take on any grandiose hermeneutics of communication, what communication means, or how thinking through other minds happens; we just look to the ground and to how agents respond to their niche, which is a combination of biotic and abiotic.

Nice, fun times. Blue, just any final thoughts or anything else? For me, I've given all my thoughts out, I think. Oh, me too, over the years. But thanks for pushing for this paper. 29 and 30 is going to be hot on the heels, slash abdomen, of this paper, because I think we're going to be able to operationalize a lot of the discussions that we had about extended cognition. I know it's also going to bring back what we talked about with the definitions of a self and individuality. So, another one in the books, on to our third decade of papers on the livestream. I hope people found it interesting, as with other papers, except this is one where hopefully they can get in touch with the authors and understand that it's a really open area and a collaborative one too. Awesome, well, thanks for giving us your time, and always giving us your time. Thank you too, Blue. So I'll flip to the closing slide, that's what we can still think about, and, awesome, see everyone around next time.