Hello, it is March 9th, 2022. We're here in ActInf Lab GuestStream number 17.1 with Mahault Albarracin. Mahault is a product director at Game Addict, a researcher at the VERSES computational phenomenology lab, and finishing a PhD in cognitive computing at UQAM. So we'll have a fun discussion today. Feel free to write any questions in the chat during the presentation and we'll have a great time learning. So thanks a lot for joining this guest stream, and I'm really looking forward to what you have to share. Yeah, thanks for having me. I'm really glad to be able to present this paper. We've submitted it and it's in the process of being reviewed. It's a really exciting paper that I'm hoping we can get out soon. So today I'm going to try to go through a lot of big concepts and how they apply in active inference, and then I'm going to try to explain the nitty-gritty of a generative model, how to make a simulation, and then what the takeaways and limitations are. I'm going to try to keep this quick. If you have any questions, feel free to ask them and I'll take anything as it goes. So let's begin with the main concept here, which is confirmation bias and conformity. This was the starting point of our research. With my co-authors, who are Conor Heins, Daphne Demekas and Maxwell Ramstead, we were trying to think through what leads people to see the world in such a way that they then have a hard time reconciling their view of the world with other people's. And so it went back to the practice of exchanging ideas, sharing concepts and values between different minds, which is a fundamental process. It really allows humans and other living agents to coordinate and basically operate socially. By sharing ideas, individuals and communities can really pursue their pragmatic goals. So basically, getting food is one of the pragmatic goals you might want to optimize for. They improve their understanding of the world, and then we notice that humans are compulsory cooperators.
Human individuals survive based on the premise that they have access to, and can leverage, bodies of cumulative cultural knowledge. Over the course of evolutionary history, humans have developed an exquisitely sensitive capacity to discriminate reliable sources of information from unreliable ones, which seems counterintuitive in the age we're in, but you have to understand how far we'd come before we got to the arms race toward misinformation. So the epistemic process, as I just said, is really not without flaws. Humans are not computing machines; they're not perfect. They mostly use heuristics. They take shortcuts in processing information to limit the consumption of energy and facilitate rapid decision making. That's the basis of survival. For example, confirmation bias implies that, all other things being equal, individuals prefer sticking to their own beliefs over changing their minds. It's easier for them to change their model as little as possible than to constantly deal with a volatile model of the world. There's a lot of research about confirmation bias and its relation to cognitive dissonance. We really don't like cognitive dissonance. We don't like feeling like the world is not what we understand it to be. And when we're faced with information that conflicts with our core beliefs, we do in fact experience cognitive dissonance. The tolerance for cognitive dissonance tends to vary across individuals. So there are people who can really tolerate a wildly volatile world; they accept that they don't know things and they accept that they could be wrong. And then there are personality types who are much less able to take this dissonance and are more likely to try to avoid it. And so confirmation bias in this way has a social influence. Individuals will prefer sampling data from their in-group and will seek to confirm their own ideas by foraging for confirmatory information from their in-group.
This ensures that they have access to like-minded allies, and they choose to belong to communities where their deeply held beliefs are promoted and shared. This limits the cognitive effort that would otherwise be expended in the foraging of information. This in-group preference also influences how strongly information is integrated. So if group membership is important for the individual, they'll tend to integrate way more information that comes from someone they believe believes the same things as them. For instance, you can think of news sources and Fox News. It's not like people from different aisles don't know what the others are saying. They just tend to take what their preferred channel says and completely ignore what other channels are saying, even if they've heard it. So confirmation bias is echoed by another heuristic, and here we're talking about conformity. That's the need to cohere with the beliefs of one's in-group. So it's kind of a push and pull, right? I want to be in a group of people who are like-minded and who believe the same things that I believe; therefore, going into a group of people who are like-minded is the best way to do so. I also don't want to be kicked out of the group, because the group has access to a lot of resources that are very good for me, right? It limits how much information any one agent has to gather to be part of a group. It also increases how much each agent is trusted. In-group agents can be much more precisely predicted by the other members of their group. This is in part due to norms and behaviors, that is, scripts. We wrote another paper about this, and generally these scripts tend to benefit the members of the group, right? That's why they also want to uphold these scripts. So being able to sample from the group entails a continued relationship with other members. If I lose access to this, I can lose access to information, food, shelter, and that leads to difficulties.
If you want to read more, I can eventually give you all the resources. So this is a natural stopping point if you want. If you have any questions or any notes, you can share them now. It was really powerful what you said about losing access to the group benefits, especially from the evolutionary and ecological point of view. What are the scripts, and how does this current work that you're sharing relate to the work that you had done with narrative and scripts? Yeah, so I mean, the definition of scripts is not perfectly shared across the board. There are a lot of fields that use it, but the way that we've reviewed the literature and brought it together, we have weak scripts and strong scripts. So weak scripts are like clusters of connected concepts that tend to exist with one another; it's like the building of a context out of naturally co-occurring concepts. And then strong scripts are how these concepts tend to sequence over time. So, for instance, certain events naturally entail one another, gathering the concepts together, and then you're going to be able to exist in a space like this. And that's how you norm not only how people behave in a group, but how they coordinate. That is how you norm how they sample reality, because let's imagine my script in this place says that I must do this, this and that. So I'm going to do this, this and that, which means I'm not going to have access to all the other things I might have done if my script had been a little different. So that's one way that we force people to only look at a subset of the available information in a given space. Cool, and then how does this research build upon what you had done those few months ago? Right, so the previous paper had formalized what a script means in terms of active inference. Basically, you as an agent read your niche and predict what you're supposed to do next.
So it puts a strong emphasis on your available policies and the way that your policies are shaped. And then you're going to tailor your prediction error to the response of the niche. So you're going to get these sorts of sequenced patterns of synchronization that reinforce each other, right? The more I get the right pattern, the more likely I am to get good reinforcement. And therefore I'm going to think that this is the right pattern for this group. And this thing is going to shape the niche so much that if anyone even wants to or tries to enact something different, they may just get so much prediction error that people will push back on it, and you're going to stay within this sort of manifold that acts as an attractor for all the states. But the way this research builds on the previous one is that we understand that a script takes many forms, right? The weak version of the script is really something you could call an ideology, right? So if you take all of my weak scripts, really what this is is how I map my observations to my states. That's really what it is, right? And so this new research is saying, well, agents are going to map things differently, and thus they're going to end up in different groups by simple virtue of mapping things so differently that eventually it makes no sense for them to try to interact with one another, because they couldn't synchronize anyway. So that's how this builds upon the previous research. Cool. Does that make sense? Yeah, please carry on. Okay, so we've discussed confirmation bias and conformity, and we've discussed how they mutually reinforce each other. To save energy, confirmation bias leads to agents being drawn to groups that validate their opinions, which increases the probability of behavioral and epistemic conformity, right? They just believe the same things. And they form the basis of information spread.
So agents spread information through media and through connections to one another, given a network structure, right? And the spread of ideas and behaviors serves local and larger scale coordination. This is very much based on the work of Maxwell, who has effectively discussed how active inference works across scales in nested Markov blankets that eventually build on and become one another. So shared norms and coordination can be understood as synchronization, which lowers the cost of information flow. This increases the certainty of the message being spread as well as the quality of its reception. So right there, with this increased certainty, this capacity to spread messages, this quality of reception, you can see already how this might lead toward better synchronization, right? Because more certainty means you may have less attention to give to one thing in order for it to happen, and you can start thinking about other things and start specializing in other areas while the certainty stays up, right? The message will be more intelligible to group members who share the common set of codes, and agents are more likely to integrate new information if it fits better with their understanding of the world. So one example of optimization of message passing is hashtags. They have been known to be heavy carriers of information in echo chambers. They were used in partisan ways to reach people of similar mindsets who understand what the hashtag means and understand that this is a signal for them to interpret what's being said in the light they want to interpret it in. So the spread of information is optimized through hashtags as metalinguistic categorization markers. That was a lot of words. Really what it means is just that we see a hashtag, we know what it means, and we know it carries a lot of meaning without having to have a whole text for it, which is great for Twitter, right?
So we labeled the communities in this study that formed in the process of belief sharing as epistemic communities. Now, this term already exists. It was already used by other fields, but we've tailored it a little bit to mean specifically communities that share and spread a worldview, or a paradigm, or normalized sampling behaviors. Individuals in the community are tied together by these practices, which reinforce the social signals acting as evidence for the shared model of the world. So, hashtags. Echo chamber formation is an extreme example of epistemic communities. Echo chambers tie people with similar views together, and they tend to actively work against engagement with external sources. They become very epistemically vulnerable when members can really no longer assess whether a piece of information is true or not. Only having access to a few sources limits how much information can be gathered, relevant sources of evidence fall through the gaps, and error will be propagated because it spreads really well, really fast and is integrated really strongly. And if people can no longer tell whether something is false or not, then the system is much more likely to be vulnerable to misinformation and errors. They'll have a difficult time checking errors against anything. And most minds in the echo chamber will become more and more synchronized and thus tend to make the same mistakes. So that's a pretty big chunk right there. If you want, we can take a beat. Yeah, I had a question. What does a healthy usage of hashtags look like? Or how do we compress information, context and perspective in a way that might not have some of these threats or failures that you're talking about? So for the first part, I have an idea, but I've never read anything that specifically said this is a healthy use. So I'll venture a guess, but I'll start with the second part of the question, which will give you a good sense.
The way to keep communities healthy is to force them to be exposed to viewpoints they may not agree with, sources that they're not necessarily familiar with and that have more nuanced takes. So the more you expose people to different kinds of people and different kinds of beliefs, the more likely they are to depolarize and be more open to possibilities. That's just a fact. We may think, through our experience: no, you know what, when I see my conservative uncle, my racist homophobic conservative uncle, when he talks, my teeth grit and I stop listening. Yes, that's true. But you are still exposed to him. You still have to engage with his thoughts to some degree. And so your reaction to push him away, that's a natural reaction and you might actually do it, but if you're never ever exposed to his thoughts, you will never be able to actually, first of all, really know whether they're true, because you've never engaged with them, right? So that's one thing. So my guess about the hashtags is: maybe try to research hashtags that are not part of your community and try to reach people who may not normally listen to you, in a language that is potentially less polarized. The best example of this is, oh, she's a YouTuber, she's amazing. Natalie Wynn, do you know her? Not familiar right now. The thing is, I don't remember the name of her channel, but she basically specialized in philosophically deconstructing conservative talking points in a way that steelmanned them and didn't just destroy them on a sort of value-based basis. It was more like: let's take these points, let's remove everything that actually really isn't true, take the kernel of truth in there, understand it through their eyes, and reinterpret why this might actually still be wrong, but not in a way that was demeaning. And she reached a lot of people this way. She reached a lot of conservatives who were like, wow, okay, you know what, I get it. And it was very effective.
So I think that's the answer to your question. Oh, carry on. Maybe, I'm sure we'll be able to bring active inference in, but it's awesome to just be building the context and seeing the motivation and perspective behind the work. Awesome. So one of the elements that will maybe give you a good sense of how we're bringing active inference in is the concept of volatility in habit formation. So stability in the group is not guaranteed. Optimal inference in a changing world requires integrating sensory data with beliefs about the intrinsic volatility of the environment. Environments with higher volatility change more often and thus have shorter intrinsic time scales at which the agent has to keep updating their beliefs. Conversely, environments with lower volatility allow longer time scales for updating your beliefs. On the other hand, having a better ability to track potentially important fluctuations in information requires paying attention to smaller changes, but increased attention to environmental fluctuations will potentially lead to increased sensitivity to random, non-informative changes in the environment. That might be called a higher false positive rate, right? So it's important for agents to have the right understanding of the volatility in their environment. One way agents cope with volatility is to constrain the uncertainty related to their own behaviors via habit formation. Here we model habit as a form of behavioral reinforcement where behaviors become more probable as a function of how often they are engaged in. Once an agent has engaged in a given behavior enough, a habit can be formed and become hard to unlearn. And this is important both for understanding how parameters are set in our generative model, and for how echo chambers can form through habit formation.
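To make the time-scale idea concrete, here is a minimal sketch of how a belief about volatility could set the effective time scale of belief updating. This is my own illustration, not the paper's actual update rule: higher believed volatility decays the running belief toward a uniform prior before each new observation is integrated, so old evidence is forgotten faster.

```python
import numpy as np

def update_belief(belief, observation_likelihood, volatility):
    """One belief update where higher volatility decays the old belief
    toward a uniform prior before integrating the new observation."""
    uniform = np.ones_like(belief) / belief.size
    # Volatile worlds: trust the running belief less (shorter time scale)
    decayed = (1 - volatility) * belief + volatility * uniform
    posterior = decayed * observation_likelihood
    return posterior / posterior.sum()

belief = np.array([0.9, 0.1])           # confident belief in idea 1
likelihood = np.array([0.5, 0.5])       # an uninformative observation
stable = update_belief(belief, likelihood, volatility=0.05)
volatile = update_belief(belief, likelihood, volatility=0.6)
```

Under high believed volatility the agent's confidence erodes quickly even without contrary evidence, which is exactly why habits become an attractive way to constrain behavioral uncertainty.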
So in our model, behavior is driven by information seeking, a drive that leads agents to preferentially sample information from other agents with beliefs that they believe are similar to their own. The way that I'm saying this, "believe that they believe", is important in active inference, because in active inference we're starting from the assumption that we don't ever really know. Things are kind of hidden from us and we have to infer what things might be. So I never really have access to what the other person believes. I think they might think this, and I'm going to try to infer their beliefs based on some observations they're putting out into the world. Now that we've pinned that down, let's start with active inference. So we're getting to the meat of it. I'm going to do a very short intro to active inference. If you already know it, great. If you don't, then this is going to be very informative. It's a biologically motivated framework that rests on first principles of self-organization in complex adaptive systems. The self-organization part is probably the most important part here, because active inference is potentially less interesting for rocks, for instance; they tend to self-organize a little less. I'm pretty sure we could effectively use it for large models of astrology, sorry, astronomy, but effectively what you really care about here is things that tend to fight entropy. So it's premised on the notion that the internal states of any biological system are statistically insulated from the environment that generates sensory observations. Basically, they have an inside and they have an outside. And so they engage in inference about the causes of what gives them sensory states, to behave optimally and keep fighting entropy. So biological systems entertain a generative model of the latent environmental causes of their sensory inputs. Latent, because I don't really know what causes them; I can only infer.
So it's a little different from reinforcement learning or reflexive behavioral algorithms, because actions taken under active inference are guided by these internal beliefs. And the beliefs themselves are optimized with respect to an internal world model, a representation of the world's causal and data-generating structure, which we call the generative model. Agents in active inference represent their own actions in their generative model by performing inference on both the hidden environmental states of the world and the consequences of their own actions. They can select behavior which, first, achieves their goals or fulfills preferences, and second, reduces uncertainty in the agent's world model. So this is where we get to the epistemic part, right? I want the world to be a certain way, and I also want to be less uncertain about the world being that certain way, right? So an agent's main objective is to increase their model evidence and to reduce surprise. Surprise is bad. Consider an agent in the world. If I want to predict myself surviving, I want to see things which are likely to make me survive. If I'm surprised by something, it's probably not likely to make me survive, and therefore it's probably something I should avoid. Like right now, I'm in my room. I expect the room to stay where it is, as it is. This is good. If a bear were to appear on my right, that would be very surprising. It's probably very bad for me. So processes like learning, perception, planning and goal-directed behavior emerge from this single drive to increase evidence for the agent's generative model of the world. So far, still good. Agents never directly act on sensory data; rather, they change their beliefs about what causes that data. This is a core step in active inference. It consists in optimizing these beliefs using a generative model. It's also known as Bayesian inference or Bayesian model inversion.
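As a minimal sketch of that model inversion, assuming a binary hidden state and made-up numbers, Bayes' rule amounts to multiplying the likelihood of the observation under each state by the prior and normalizing:

```python
import numpy as np

def bayesian_posterior(likelihood, prior):
    """Posterior over hidden states: P(s | o) proportional to P(o | s) P(s)."""
    unnormalized = likelihood * prior
    return unnormalized / unnormalized.sum()

# Binary "idea" state: the prior slightly favors idea 1
prior = np.array([0.6, 0.4])
# The observation (e.g. seeing hashtag 1) is more likely if idea 1 is true
likelihood = np.array([0.8, 0.3])

posterior = bayesian_posterior(likelihood, prior)
```

Since the observation was more probable under idea 1, the posterior shifts further toward idea 1 than the prior was; this same operation, repeated over time, is the "perception" half of the model described next.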
This inference answers the question: what is my best guess about the state of the world, given my sensory data and my prior beliefs? I don't come into the world not knowing anything. I have some basic beliefs about what's going to happen, then I get observations, I make predictions about what might happen, and then I act on those predictions. This follows Bayes' rule, where the optimal belief about a hidden or latent variable given some sensory data is called the posterior distribution. If you want, I can keep going into that or I can skip it, but basically you calculate the probability of something given something else. In active inference, perception, the generation of a best guess about the current hidden states of the world, is formalized as the computation of a posterior distribution over hidden states. And action, the active part of active inference, is the computation of a posterior distribution over policies. Policies are just sequences of potential actions. Okay, now we introduce the actual model. Is it clear so far? Is the active inference part pretty clear? That's awesome. You hit on many of the great pieces of active inference. The only piece that I really wanted to mention was the difference between the observations, like what was said, and the hidden, hypothesized cognitive states. It opens up a space between what happened and why somebody said that. And that comes up all the time in epistemic communities. What did the author mean by those words, or what did the person mean by saying that? That's why the context is so important, and active inference gives a framework that may help us model that kind of scenario. Exactly. And I mean, this is exactly where we're trying to go with this research: right now we want to model how people tend to have different views of the world.
But the underlying phenomenon here, and we're going to try to dig more into that, or the hypothesis anyway, is that there is literally a different mapping between an observation and a state, which means for the same thing being seen by two different people, we're going to infer two very different things. And the funny thing is, then you get into the world of values, which is a whole other scale, where we may interpret the same thing but put a different spin on it. And that ends up being like: no, no, I know what he meant. I just fundamentally disagree with it. And this is how we get into these scripted webs of conceptual spaces, where you see these graphs of concepts being connected, but at some point there's a rift. And that rift makes it so that I value this and you do not. And so we can converge toward understanding without necessarily being able to converge toward acceptance. This is still very hypothetical, though. One other note on what you just said there: it's like the difference between orthodoxy and orthopraxis. So what is the similarity between conformity of thought and latitude on action, and vice versa? And in what way are the systems around us aligned or not with our preferences about those features of systems? Yeah, absolutely. Absolutely. I mean, I feel like this could all be encased in C, right? Like, we can interpret the same thing, but my C is way over there, yours is over here. I'll explain C later. This made very little sense to a lot of people. So let's introduce the model a little more. We have individuals, agents, who share information with one another, and they come to form beliefs about their local environment and about the beliefs of other agents in their community. To understand this phenomenon, we leverage the active inference framework.
Here, organisms tend to minimize a quantity called variational free energy, which quantifies the divergence between expected and sensed data, as explained before. From this point of view, to select an action is to infer what I must be doing, given what I believe and what I sense. There's been a lot of work done on using active inference to study social systems. I'm not going to go over all of it, but there are really cool studies. I've just highlighted a few here, where basically systems minimizing free energy give rise to large scale behavioral coordination. Much of the work, though, is still theoretical, so it'll be interesting to see how experimental work can delve into this. I know Jonas Mago, Riddhi Pitliya and Maxwell Ramstead are doing a lot of work on this. I think Lars Sandved-Smith is doing some cool work on this as well. I'm just glossing over it here. So at first glance, it might appear difficult to model a phenomenon like confirmation bias using the active inference formulation. Why? Because remember, our agents are supposed to be guided by the principle of maximizing Bayesian surprise, or salience, which requires constantly seeking out information that is expected to challenge one's world model. Because the more surprise I consume now, the less likely I am to be surprised later, right? But information gain is subjective. Epistemic value, the Bayesian surprise or information gain, is always an expected surprise from the point of view of the agent. It's relative to the agent's set of beliefs, or generative model. And because of this subjectivity, the informativeness or epistemic value of an action can be arbitrarily far from the agent's expectation. So taking advantage of this, we gave agents what we refer to as epistemic confirmation bias, by building a prior belief into the generative model that agents are more likely to sample informative information from agents with whom they agree a priori. And the a priori notion is important.
So agents will sample agents with whom they agree, under the belief that those agents are more likely to provide higher quality information. There you go. This is a key difference from some traditional opinion models. There have been a lot of opinion models out there, not many with active inference but definitely a lot with traditional approaches. And in those, the implementation of bounded confidence to motivate polarization is usually hard coded into the agent's ability to perceive and thus update their beliefs. In our model, polarization is motivated by the positive effect of confirmation bias and is thus directly part of the agent's model of the world. This allows agents to get more evidence about their environment if the information comes from another agent that shares the same worldview; they're implicitly motivated in their generative models to gain more evidence about the world if it confirms their preexisting beliefs. Also, in traditional models, agents can normally directly perceive the belief state of other agents. Directly perceive, that's the key here. We believe this is unrealistic, because we only infer the beliefs of others. In our model, agents do not have direct access to each other's belief states. They infer the belief states and update those inferences through observation. There have been a few recent papers that began to build Bayesian models of opinion dynamics, motivated by the Bayesian brain hypothesis. The crucial point that distinguishes approaches like active inference and planning as inference from the generative Bayesian approach is the notion that actions themselves are inferred. Behavior is cast as the result of inference, specifically by sampling actions from a posterior distribution over actions. So actions are selected in order to achieve goals, minimize future uncertainty and maximize a lower bound on Bayesian model evidence.
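A toy sketch of what "sampling actions from a posterior distribution over actions" can look like, following the generic planning-as-inference recipe rather than the paper's exact implementation. The precision parameter `gamma` and all numbers here are invented for illustration:

```python
import numpy as np

def sample_action_posterior(neg_efe, habit_prior, gamma=4.0, seed=0):
    """Posterior over actions proportional to habit_prior * exp(-gamma * EFE).
    Behavior as inference: the action is sampled, not argmax'd."""
    log_post = np.log(habit_prior) + gamma * neg_efe
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    rng = np.random.default_rng(seed)
    action = rng.choice(len(post), p=post)
    return post, action

# Two actions: attend to neighbor A (whom I believe agrees with me) or B
neg_efe = np.array([-0.2, -0.9])   # A promises more expected information gain
habit = np.array([0.5, 0.5])       # flat prior over actions: no habit yet
posterior_q, action = sample_action_posterior(neg_efe, habit)
```

Because the action is sampled rather than maximized, less preferred actions are still occasionally taken, which is what keeps the dynamics stochastic at the population level.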
So we're using this to supplement the goal-directed aspect of policy inference, which is driven by the expected free energy. And this becomes habit when the prior over actions becomes inflexible, right? If the prior over actions becomes strong enough, it's just always going to be that, right? This prior is learned over time, and in the context of opinion dynamics, it can lead to a propensity to continue sampling agents that have been sampled previously. So it's this idea of choosing actions through inference, in accordance with the minimization of uncertainty. It's a powerful modeling technique, because through the choice of policy inference, one can encode various social behaviors like conformity, habit formation, hostility and indifference. Here we really only did habit formation, because we had to focus, but eventually this could lead to very, very powerful modeling tools for social inference. So I can stop here before I launch into the hypotheses. Awesome, with active inference, let's keep going. Awesome, so we have three hypotheses. The first one is that we cast confirmation bias in active inference as a form of biased curiosity in which agents selectively gather information from other agents with whom they agree. Epistemic confirmation bias can mediate the formation of echo chambers and polarization in social networks, and we also believe that epistemic confirmation bias and network connectivity will modulate the formation of polarized epistemic communities. Hypothesis number two: we consider the effect of agents' beliefs about the volatility of their social environments. So we try to see how beliefs about social volatility impact their sampling behaviors, which themselves may interact with epistemic confirmation bias. We hypothesize that beliefs about less quickly changing social environments will increase the likelihood of polarization.
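The habit mechanism described above can be sketched as a Dirichlet-style count update on the prior over actions. This is a generic version of habit learning under active inference, with an illustrative learning rate and counts of my own choosing:

```python
import numpy as np

def reinforce_habit(counts, chosen_action, learning_rate=1.0):
    """Dirichlet-style count update on the prior over actions:
    each chosen action becomes a bit more probable next time."""
    new_counts = counts.copy()
    new_counts[chosen_action] += learning_rate
    return new_counts

counts = np.ones(3)  # flat initial counts over 3 candidate neighbors
for _ in range(20):
    counts = reinforce_habit(counts, chosen_action=0)  # keep sampling neighbor 0

habit_prior = counts / counts.sum()  # a now strongly peaked prior over actions
```

After enough repetitions the prior becomes so peaked that it dominates the epistemic term in action selection, which is the "inflexible prior" sense of habit mentioned above, and why sampling tends to lock onto previously sampled agents.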
And hypothesis three: we hypothesize that we can model selective exposure effects and conformity through habit formation, which emerges through Bayes-optimal learning. So we show that agents will begin to sample only those who belong to a particular epistemic community. A greater learning rate for habit formation will lead to clusters within the network, amplifying and quickening the formation of echo chambers. So, the model. This is a multi-agent active inference model. The group of agents update their beliefs about an abstract binary state that represents two conflicting ideas, and they hold opinion states about these ideas. Each agent also generates an action that is observable to other agents, in the context of a digital social network. We're using Twitter language here; we basically tried to replicate distributions that were found in previous studies on Twitter. Over time, each agent updates a posterior distribution of beliefs about which idea is true, as well as beliefs about what a connected set of other agents in the network believe. So basically, what do their neighbors believe? Both of these inferences are achieved by observing the behavior of other agents. Agents can only observe the behavior of one agent at a time, so they have to pick who they want to look at. All right, the agent model. I can go into detail if you want; I know it looks like a lot, but it's actually pretty simple. These encapsulate observations, right? So this is the observation about what I'm tweeting, who I'm looking at, and what they're tweeting. And then the A matrix maps the probability of a state having caused an observation. So we have states about whether the idea is true or not: is it idea one, is it idea two? This is a bit blurry, but basically which neighbor I'm looking at, and this one is who I'm looking at and what they believe, right? So it's a pretty simple mapping. Here you have D, which is the priors.
So, what do I believe about the world before having any observations of it? Then B is the transitions: how are the states likely to succeed one another? Here you have the policies: what kinds of actions am I likely to take to bring about this succession of states? And C is my preferences. There are other terms, but they're a bit more complicated, so I'm not going to go into them; these are the main things you have to know about a generative model. Each active inference agent in the multi-agent simulation is equipped with the same generative model. A single agent observes the actions of other agents, forms beliefs about an abstract, binary environment of states, and then chooses actions which are themselves observable to other agents. The focal agent's choice consists of two simultaneous decisions, an expression and an observation. So basically: what am I tweeting, and who am I looking at? An agent can only observe one neighbor at a time. So at each time step, a focal agent both tweets its own hashtag and chooses to read a hashtag tweeted by another single agent. Again, "hashtag" is just the linguistics; we could have said anything. Basically these are numbers in a matrix. In a much more complex model, we could have had them read a hashtag and a sentence and then try to infer what the sentence really says. But for now, the point was just to make them try to infer a belief based on- I hope now we'll rejoin and then we'll continue. Hey, are you there? Yep. Sorry, I don't know what happened. Oh, we get all kinds of surprises; we prefer surprises here. You can reshare and just continue. It was only a few seconds.
When you say the right keywords, the internet short-circuits; it happens. Can you see my screen again, is everything cool? Great, we're back. Where did you cut off? We heard up to observing one neighbor at a time, and how you could have modeled more complex interaction effects with multiple artifacts. Right, okay, cool. Now let's explain the hidden states. So what are the states they're trying to infer? The idea state: basically a binary variable, idea one or idea two. Is idea one true? Is idea two true? The meta-belief, which is a particular neighbor's belief about which of the ideas is true. So does my neighbor believe this or that? The tweets: what am I tweeting? And who should I attend to: which neighbor am I attending to? This was actually quite complex. Daphne and Connor did an amazing job here, because we wanted to vary the number of neighbors, and so the matrices had to grow in relation to how many neighbors there were. The way they did this was beautiful. So shout out to Connor, who's obviously a genius, and Daphne as well. Now let's discuss control states. Control states are: which of the states can my agents actually act upon? Here it was the tweet. So I can decide to tweet hashtag one or hashtag two, and I can decide to look at neighbor one through n. Now, observation modalities: what can they see? They can see their own tweets. This is important because oftentimes you have to have a sort of identity mapping between your observations and your states, so that the states can be taken into consideration in the actions. So here you have to know what you're tweeting.
You have to know what your neighbor is tweeting, and you have to know who you're attending to. Pretty simple. Now, the likelihood. In a POMDP, in a generative model, we have to describe the likelihoods that determine how hidden states relate to observations. So the entire A array is a set of tensors, with one sub-tensor per observation modality. Each modality-specific likelihood tensor is a potentially multi-dimensional array that encodes the conditional dependencies between each combination of hidden states and observations for that modality. We also have a transition model: the transition likelihood, the dynamical likelihood in B that I explained earlier, the environmental dynamics. So, how are things likely to change? A higher value means the same idea remains valid over time; a lower value means the same idea is less likely to stay true over time, so more environmental volatility. And then there are the dynamics of whether my neighbors are likely to change their ideas of the world. The next component is the generative model's priors over observations and states. In discrete active inference, we represent these as the vectors C, D, and E. Goal-directed action is motivated by tilting toward a baseline prior over observations, which specifies the agent's preferences. That's the C I was talking about earlier: what do I want to encounter in the world? What are the outcomes I'd like to see? Those usually act as a kind of reward. Then there's the prior over hidden states at the initial time step, which is the D vector. It encodes the agent's beliefs about the initial state of the world before having made any observations. And in the current model, we make the prior over control states an empirical prior parameterized by a link function, denoted by the E vector. I'm not going to go too much into this, but basically it implies that the prior over the control states corresponding to tweet actions depends on the posterior over the idea.
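The generative-model pieces just described (the A likelihood, the B transitions parameterized by volatility, and the D prior) can be sketched in a few lines. This is a minimal illustration assuming a two-idea world; the sizes, the parameter values, and the `softmax(inverse_volatility * I)` construction for B are illustrative choices, not necessarily the paper's exact implementation.

```python
import math

# Minimal sketch of the generative-model pieces: A (likelihood), B
# (transitions), D (prior), for a 2-idea world. Illustrative only.

def normalize(v):
    s = sum(v)
    return [x / s for x in v]

def softmax(col):
    exps = [math.exp(x) for x in col]
    s = sum(exps)
    return [e / s for e in exps]

NUM_IDEAS = 2

# D: prior over the idea state before any observation (flat here).
D_idea = normalize([1.0] * NUM_IDEAS)

# A: likelihood mapping hidden idea -> observed hashtag. A[obs][state]:
# a believer of idea i tweets hashtag i with probability `fidelity`.
fidelity = 0.9
A_hashtag = [[fidelity, 1 - fidelity],
             [1 - fidelity, fidelity]]

# B: transition model over the idea state, built from an "inverse
# volatility" scalar. Higher inverse volatility -> stickier states ->
# the same idea is more likely to stay true over time.
def make_B(inverse_volatility, n_states=NUM_IDEAS):
    cols = []
    for j in range(n_states):
        col = [inverse_volatility if i == j else 0.0 for i in range(n_states)]
        cols.append(softmax(col))
    # transpose so B[next_state][current_state]
    return [list(row) for row in zip(*cols)]

B_stable = make_B(4.0)    # low environmental volatility
B_volatile = make_B(0.5)  # high environmental volatility

def bayes_update(prior, A, obs):
    """Posterior over hidden states after observing hashtag index `obs`."""
    return normalize([A[obs][s] * prior[s] for s in range(len(prior))])

# One step: observe hashtag 0, update the belief about which idea is true.
posterior = bayes_update(D_idea, A_hashtag, obs=0)
print(posterior)  # belief shifts toward idea one
```

With a flat prior and fidelity 0.9, one observation of hashtag one pushes the posterior to 0.9 for idea one; the stickiness of B then governs how much of that certainty carries over to the next time step.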
So basically your agents are always going to tweet exactly what they believe. They're not lying, but they could; we could change that. Now, habit learning. Under active inference, learning also emerges as a form of variational inference: it's not inference over hidden states, but rather over model parameters. Parameter inference is referred to as learning because it is assumed to occur on a fundamentally slower time scale than hidden state inference. In this model, we use habit learning to model the development of epistemic habits: as I said, the tendency to resample the same people. So agents simultaneously infer which neighbor to attend to, based on the imperative to minimize expected free energy, and they also continuously learn a habit based on the frequency with which they attend to a certain neighbor. All right, the actual simulation. In a given simulation we had several agents, but we didn't always use the same number. We were very limited by computational constraints; we would have wanted to make something much larger, to really model an actual social media network, but this is very computationally heavy. In future studies we may try to increase the number of agents, but for now we've stayed between 12 and 30 agents. Again, the same model for each agent. At each time step, agents simultaneously update their beliefs and take an action. Each agent's observations are a function of its own actions at the previous time step, as well as the actions of a selected set of neighbors at the previous time step. So not everyone is connected to everyone; I don't need to see everyone all the time. Each agent has a fixed set of neighbors, where the particular neighbors are determined by a randomly chosen network topology. Okay. So, focal agents with a higher ECB believe that tweets are more reliable if they come from neighboring agents that are believed to share the same opinion, right?
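The habit learning just described can be sketched as a count-based update over the habit (E) vector: every time a neighbor is attended to, its count grows by the learning rate, and the normalized counts become the prior over the next attendance choice. The Dirichlet-count form and the class names here are my illustrative choices, not the paper's exact code.

```python
# Sketch of habit learning over neighbor attendance: a Dirichlet-style
# count update on the habit (E) vector. Illustrative, not the paper's code.

def normalize(v):
    s = sum(v)
    return [x / s for x in v]

class HabitLearner:
    def __init__(self, n_neighbors, learning_rate):
        self.counts = [1.0] * n_neighbors  # flat initial habit
        self.lr = learning_rate

    def attend(self, neighbor):
        # Each attendance event strengthens the habit for that neighbor.
        self.counts[neighbor] += self.lr

    def habit_prior(self):
        # Normalized counts act as the prior over the next attendance choice.
        return normalize(self.counts)

agent = HabitLearner(n_neighbors=3, learning_rate=0.5)
for _ in range(10):
    agent.attend(0)  # keeps reading the same neighbor
print(agent.habit_prior())  # probability mass piles up on neighbor 0
```

A higher learning rate makes the counts concentrate faster, which is exactly how stronger habit formation amplifies the tendency to resample the same people.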
So, like-minded people. The inverse social volatility parameterizes the focal agent's beliefs about the stochasticity of the social dynamics. The learning rate is associated with updating the habit vector over neighbor attendance, and network connectivity is the way individuals are connected to one another. Pretty simple. So these are the parameters we tweaked during the simulations. You'll see that they tend to interact with one another, and they're related to the hypotheses we formed initially. Now, this figure is really cool. It visualizes opinion formation in a single active inference agent and sheds light on the relationship between inverse volatility and epistemic confirmation bias. We use a simplified three-agent setup here; we were just trying to test something. At each time step, the focal agent chooses to read a hashtag from one of its two neighbors, and the two neighbors are not actually active inference agents here. They're just part of the generative process: sources of sequences of discrete hashtags, who tweet hashtag one or hashtag two. And as you can see, higher epistemic confirmation bias induces a positive feedback effect, where the focal agent comes to agree with one of its two neighbors with higher certainty as it tweets the thing that it believes. Even when exposed to a sequence of hashtags over a hundred time steps, the agent receives the observations, and below each subplot, the heat map showing the temporal evolution of the probability of sampling neighbor one versus neighbor two shows that it clearly moves toward the one its prior started with. Beliefs in more meta-belief volatility lead to higher posterior uncertainty about the idea, but it still eventually leads to believing the same idea you initially leaned toward. Okay. Now, this one is much more informative about the actual phenomenon. You can see here two very interesting phenomena: consensus and polarization.
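Before moving to the network results, the single-agent positive-feedback effect above can be caricatured in a few lines. This is a deliberately simplified stand-in, assuming ECB acts as an exponent sharpening the preference for the neighbor believed to agree; the paper's actual mechanism is a precision on the likelihood, so treat this only as an intuition pump.

```python
import random

# Caricature of the three-agent setup: a focal agent reads hashtags from two
# fixed (non-inferential) neighbors. ECB is modeled here, illustratively, as
# an exponent that sharpens the tendency to sample the agreeing neighbor.

def normalize(v):
    s = sum(v)
    return [x / s for x in v]

def run(ecb, steps=100, seed=0):
    rng = random.Random(seed)
    belief = [0.55, 0.45]      # slight initial lean toward idea one
    neighbor_tweets = [0, 1]   # neighbor 0 tweets #1, neighbor 1 tweets #2
    for _ in range(steps):
        # Probability of sampling neighbor 0 grows with belief in idea one,
        # sharpened by the ECB exponent (confirmation-seeking sampling).
        p0 = belief[0] ** ecb / (belief[0] ** ecb + belief[1] ** ecb)
        who = 0 if rng.random() < p0 else 1
        obs = neighbor_tweets[who]
        # Likelihood of the observed hashtag under each idea.
        like = [0.9, 0.1] if obs == 0 else [0.1, 0.9]
        belief = normalize([like[s] * belief[s] for s in range(2)])
    return belief

low = run(ecb=1.0)
high = run(ecb=6.0)
print(low, high)  # in both runs the agent ends up certain about one idea
```

The positive feedback is visible in the loop: whichever idea the belief leans toward determines which neighbor gets sampled, and that neighbor's hashtags push the belief further the same way.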
They emerge when simulating the network. Here, the observations of every agent are generated by the actions of other agents. The collective belief dynamics under different generative parameterizations show the evolving beliefs of all agents about idea one or idea two. In panels A and D we see polarization, where two subsets of agents end up believing in two different levels of the idea hidden state with high certainty. You can really see they go very far: it goes to one or zero. It's not 0.6; it's really one or zero. They're really certain. Panels B and C, on the other hand, show consensus, where the whole network converges on the same opinion by the end of the simulation. This showcases the rich phenomenology displayed by collectives of active inference agents, which validates our model alongside known opinion dynamics models; we were able to replicate the distributions we were trying to converge towards. Do you have any questions? Yeah, maybe go back to the slides. What kind of dynamics might result in persistent diversity, not just absolute consensus or absolute bifurcation? You mentioned you didn't see a continuum of people who aren't sure. So what kind of parameters, or what kind of cognitive mechanisms, would facilitate that? So, lower epistemic confirmation bias for sure would potentially lead to more variety. We would look at network structure eventually. Lower habit formation would definitely help, as you can see here, and also higher beliefs in volatility. If you believe that volatility is high, you're going to keep sampling a lot of people, because you believe uncertainty is going to keep arising, right? So those are three parameters that really lead to more diverse and less polarized beliefs about ideas.
Because it's so interesting how you captured the two things you can do on a classic social media platform: decide who to follow, and then, at a finer scale, who to pay attention to. Even though it's also partially put in front of you by the feed and the algorithm, there's a little bit of agency in what you pay attention to and who you follow at a slower time scale, and then how you act and how that modifies the epistemic niche. Some people follow a lot and act little, and vice versa, and different people's cognitive models get updated in different ways despite having access to similar platform affordances. These are really great points. In future studies, you might want to see how the niche responds. Right now they're crafting their niches simply as a function of who they're talking to, right? But you could also model the algorithm interfering with you. The algorithm knows you're looking for this, so it's going to cut some things from you. For instance, the niche could have determined: well, okay, you're listening less to this, so I'm not going to cut that connection, I'm just not going to let it be broadcast to you right now. That's one way the algorithm works: it selects things to put in your feed, and even though the people aren't gone, they're still there, you just see them less. So that's one way we could have modeled a relationship with the niche. And as you said, we could have modeled different kinds of behaviors, for people who are more lurkers, right? How do their beliefs change if they're not posting? That's how, eventually, we're going to be able to model influencers: people who broadcast to a lot of people while the other people don't broadcast as much.
So the beliefs of these people are going to be super influential, as opposed to the beliefs of the smaller accounts, which are much less influential. I mean, there's a lot that can be done. It's really cool. The only problem in life is there's not enough time. So anyway, I'm going to keep going, because we're getting to the cool part. There was an interaction between epistemic confirmation bias and network connectivity. Here you can see a heat map of the mean polarization index across a hundred independent realizations of the multi-agent opinion dynamics simulations, for unique combinations of network connectivity and epistemic confirmation bias. At the bottom, the selected line plots show extreme settings of p, so p = 0.2 and p = 0.8, and of the ECB parameter. The shaded areas around each line represent the standard deviation of the polarization index across independent realizations. So this is hypothesis one. We investigated how epistemic confirmation bias and network connectivity p determine the collective formation of epistemic communities. We systematically varied both epistemic confirmation bias, 15 values tiling the range three to nine, and network connectivity, 15 values of p tiling the range 0.2 to 0.8, in networks of 15 agents, over a hundred different realizations of each condition, for a hundred time steps. So it's really a lot, but the results are beautiful. We were trying to see how higher epistemic confirmation bias in sparse networks might drive the emergence of epistemic communities through the formation of belief clusters that are both dense and far apart in belief space; they really believe different things. To assess the emergence of epistemic communities, or clusters, we define the polarization index, which measures the degree of epistemic spread in a system.
And so a high value of the polarization index, close to one, indicates more spread-out beliefs and implies clustering and echo chamber formation, whereas a low value implies the network's agents have similar beliefs about an idea: less diversity, less polarization. The figure shows the effects of varying epistemic confirmation bias and network connectivity on polarization. It is clear in the first column of the heat map that highly spread-out beliefs can occur at all values of epistemic confirmation bias in the presence of sparse connectivity. Denser networks in general reduce the risk of polarization, as seen by the drop-off, though increased epistemic confirmation bias can still counteract this effect to some extent by marginally bumping up the risk of polarization. This is a really interesting result, but it comes back to what we were saying earlier. A dense network means there are a lot of people whose opinions you can see, and who may not agree with you, so you have more access to people's beliefs. Sparser networks mean you're very quickly going to fall into an echo chamber, because you're going to cut off some people you don't want to listen to, and then you have no more connection with those people. There we go. In this one you see another heat map. These are really noisy; we're sorry about that. We redid them for the reviewers, but they're not ready, so I'm going to help you make a little sense of them. Here you have a heat map of the polarization index for 225 combinations of inverse belief volatility, so that's hypothesis two, and epistemic confirmation bias precision. The top-right heat map shows the re-attendance rate for the 225 combinations of inverse belief volatility and epistemic confirmation bias. Below left is a line plot of the most extreme rows of the polarization heat map, and below right is the line plot of the most extreme columns of the re-attendance heat map.
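The analysis pipeline described here, a spread-measuring polarization index evaluated over a grid of parameter combinations with many independent realizations per cell, can be sketched as follows. The index is written here as the mean pairwise distance between agents' beliefs in idea one, which is an illustrative stand-in rather than necessarily the paper's exact definition, and `run_simulation` is a hypothetical placeholder for the full multi-agent model.

```python
import random

# Sketch of the sweep protocol: polarization index per run, averaged over
# many independent realizations, for each (ECB, connectivity) grid cell.

def polarization_index(beliefs):
    """Mean pairwise |difference| of posterior beliefs; higher = more spread."""
    n = len(beliefs)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(abs(beliefs[i] - beliefs[j]) for i, j in pairs) / len(pairs)

def run_simulation(ecb, connectivity, seed):
    # Hypothetical placeholder: returns final beliefs of 15 agents. The real
    # model would run the active inference simulation for 100 time steps.
    rng = random.Random(seed)
    return [rng.random() for _ in range(15)]

def sweep(ecb_values, p_values, n_realizations):
    heatmap = {}
    for ecb in ecb_values:
        for p in p_values:
            scores = [polarization_index(run_simulation(ecb, p, seed))
                      for seed in range(n_realizations)]
            heatmap[(ecb, p)] = sum(scores) / len(scores)  # mean over runs
    return heatmap

print(polarization_index([0.99, 0.98, 0.02, 0.01]))  # two separated camps
grid = sweep([3.0, 6.0, 9.0], [0.2, 0.5, 0.8], n_realizations=5)
print(len(grid))  # 3 x 3 = 9 cells
```

With two tight camps at opposite ends of belief space the index is high; with everyone clustered around the same value it drops toward zero, which is the distinction the heat maps are plotting.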
So for hypothesis two, we modeled behavior under different values of inverse social volatility to see how it would interact with epistemic confirmation bias. We swept over epistemic confirmation bias and social volatility, and we measured the re-attendance rate and polarization index for each configuration. It portrays a complex picture of the relationship between ECB and inverse social volatility. In the case of high volatility over meta-beliefs, agents are driven to periodically re-attend to neighbors in order to resolve growing uncertainty about their beliefs; you can see a higher average re-attendance rate. Interestingly, there seems to be an interaction between re-attendance rate and epistemic confirmation bias, such that in the presence of both high volatility and low epistemic confirmation bias, the re-attendance rate is maximized. So they'll re-attend to the same people even if they have low epistemic confirmation bias, because with high volatility they constantly think that people are likely to change their minds, so they're going to keep attending to those people. The same holds with somewhat lower volatility, though not the lowest, because if you have the lowest setting of volatility, you're just not going to care so much what other people think, so you're just going to keep re-attending. We speculated that the absence of ECB makes the epistemic value of attending to every neighbor pretty much equally high, and thus agents will continue revisiting neighbors sequentially, with the attendance preference for any given neighbor solely dependent on the time elapsed since the last reading of their hashtag. So this is interesting: it leads to diverse social attendance patterns, such that agents will either constantly sample new neighbors, with no particular neighbor preferred, or engage in uncertainty-driven re-sampling.
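The re-attendance rate measured above can be computed from an attendance log. One natural reading, used here, is the fraction of time steps on which the focal agent attends to a neighbor it has already attended to before; this is an illustrative definition, not necessarily the paper's exact metric.

```python
# Sketch: re-attendance rate as the fraction of time steps on which the
# focal agent revisits a previously attended neighbor. Illustrative metric.

def re_attendance_rate(attendance_log):
    seen = set()
    revisits = 0
    for neighbor in attendance_log:
        if neighbor in seen:
            revisits += 1
        seen.add(neighbor)
    return revisits / len(attendance_log)

print(re_attendance_rate([0, 1, 0, 0, 2, 1]))  # 3 revisits out of 6 -> 0.5
```

An agent that cycles through fresh neighbors scores near zero; an agent locked onto the same few neighbors, whether by habit or by volatility-driven checking, scores near one.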
So they can either sample everyone or sample the same people, but they'll try to attend to their neighbors a lot. Okay, here we have the last hypothesis, hypothesis three. We got there. It's a heat map of the polarization index for all 225 combinations of learning rate and epistemic confirmation bias precision. Above right, you have a heat map of the re-attendance rate for the 225 combinations of learning rate and epistemic confirmation bias. The parameters represent the centers of the normal distributions sampled from across trials for each configuration. Below left is the most extreme row of the polarization index heat map, and below right the most extreme column of the re-attendance heat map. Here we're trying to see whether the polarization of networks via habit formation follows hypothesis three. We swept over ECB, 15 values tiling the range three to nine, and learning rate, 15 values tiling the range zero to nine. The networks were pretty small: 15 agents with a connection probability of 0.4, an inverse social volatility of six, and, as before, the same idea volatility of nine. ECB was again normally distributed with a fixed mean and a variance of one. So we sampled from a normal distribution, and eventually we lowered the variance, which gave us much better results, but the learning rate was fixed for each condition, to really isolate its effect. The learning rate incentivizes agents to re-attend to the same neighbor by forming a habit. This competes with the epistemic value of attending to a new neighbor with unknown beliefs. This tested the hypothesis that a higher learning rate, so stronger habit formation, increases polarization. And here we can demonstrate that learning rate and epistemic confirmation bias interact to influence outcomes at the collective level: a higher learning rate induces more polarization, implying the formation of more stubborn epistemic communities in the network. Okay, so do you want me to go into the takeaways?
That'd be awesome. Okay. So here we showed confirmation bias as an epistemic phenomenon: our agents have biased beliefs. We saw that ECB confers a higher weight on information that comes from peers, and that can lead to a bad bootstrap. We find that opinion dynamics heavily depend on network density. This was not counterintuitive, but the way it happens was a little bit counterintuitive. The clustering phenomenon is exacerbated by adding the capacity to form habits. This is important: we found that ECB, in the presence of habit formation, exacerbated polarization due to the formation of echo chambers, tight communities of agents that only read each other's hashtags. And the contributing influence of beliefs about social volatility on exploratory social sampling leads us to consider the role of norms in social settings. If an agent is incentivized via epistemic value, or curiosity, to pay attention to neighbors whom they are uncertain about, their social group could be a source of constant surprise, as long as their beliefs about their neighbors are constantly fickle. So in other words, even in the presence of a group of like-minded peers- thank you, I've got burritos, I guess. Burritos, the best kind of niche modification. Yeah, I know, right? So, we expect that increased beliefs about social volatility lead peers to repeatedly re-attend to one another. Okay, we have limitations in this work. As I said, a very low number of agents. We only modeled beliefs about two mutually exclusive ideas; it might have been more interesting to have more. And we would have wanted to look at the emergence of similar but distant sub-communities as well, even if they're not already connected. So yeah, that was all of it. Cool work, really fun to hear about. Well, if anyone's watching live, they can ask any questions. Otherwise, what would you like to start off by talking about?
Like, when you present this, for whatever audience, what do people often ask about? They ask the same questions you ask, really. They ask: what can we do about social media? How do we factor in the algorithm? How does it tie into misinformation? I guess that's a question we haven't really talked about, the notion of misinformation. So that's something you're interested in; let's hear about it. Yeah. Misinformation is kind of a tricky question, right? Because you'd think that if something doesn't agree with us, we cast it instantly as misinformation. And in fact, we tend to fall very heavily for many sources of misinformation; the thing is, we fall for misinformation that tends to reinforce our view of the world. So even if a piece has been crafted by the right, if you're more liberal, more left, but it kind of confirms your view of the world, you're more likely to integrate it. And this is pretty important, especially when we're trying to think about the QAnon people: people whose views aren't necessarily completely polarized, but who want to view the world as something they can unfold, right? Something they have control over, with epistemic value they can gather. And so they're being fed information which feeds into that sort of meta-view of the world: that they can understand the hidden truth about the world. And so this becomes a sort of pull, where it doesn't really matter what I pitch at you, so long as it gives you the sensation that you have access to something others really don't, and that therefore you now understand the structure of the world a little better. Even though when you take the whole thing together, when you look at the entire structure of conspiracy beliefs, they're extremely unlikely and extremely complex.
And one way we've noticed that we can distinguish these conspiracy theories from true facts is that they tend to have a really interesting structure: they're very sparsely connected across very large sets of elements. Normally, in a true story, something that's real, even if there are a lot of elements or fields combined, there are a lot of connections among things, right? Because it's real, you can see those real connections from a lot of directions, and if you lose one connection, everything doesn't fall apart. In conspiracy beliefs, the connections are so sparse that if you lose one, the entire theory doesn't make sense anymore. That's the kind of structure we see. And you can see how this would have epistemic value, right? Because everything is so surprising and uncertain that people would feel a lot of value from accruing all this information. So I think that's where we fall into the bad bootstrap. But there are other kinds of bad bootstraps. I don't want to make any community feel like they're being targeted, or that I'm being negative towards them, but let's talk about a cult, right? Think about a cult which starts believing a very niche set of beliefs. And the way they do this is by cutting off contact with everybody else, because it's much easier for me to instill a belief in you if there's nobody else to tell you that this is crazy, this is bullshit, what are you doing? So one of the first things they do is tell you to cut off contact with a lot of people. They reinforce the beliefs constantly. They punish anyone who dissents, because that's how you reinforce a belief. And then you find these sorts of communities, which are only locally functional.
And if you take someone out of that community afterwards and put them somewhere else, they're kind of lost, because that's not how the world works. They need to relearn a lot of things; they need to develop behaviors distinct from their previous belief system that allow them to exist as humans in an already pretty complex civilizational system. That was very interesting, about the resilience of the meta-view. So how do we use active inference to study these different phenomena? How do you connect the nodes and edges of the graph to the generative model, the cognitive model? How does that representation connect with the phenomena we want to model? I mean, that's the better question, isn't it? That's what we're working on at the computational phenomenology lab. I have a theory, but it's still being developed. My sense is you're going to start treating events as observations, right? So you're a node here in the graph, and everything around you is a sort of observation, and you're trying to determine layers in the graph which become states. You're going to move across the graph by trying to predict, effectively, what the next state is, and see whether the new thing you're going to observe matches the state you expect. So that's just how you move in the graph. And then there's the way you establish what is true or not; that's one way to do it. There are other ways, right? There are also swarm beliefs and so on, but one way to do it is: how much can I predict from this node? How much is actually connected to this node? And if we take that structure again, comparing the graphs which are sparsely connected and don't allow for any connection to be lost with real ones, you're going to see a lot of dense connections between concepts.
And it's going to be very easy for you to find your way through the graph: well, I can go there, and there, and there, and make predictions about the next potential nodes given the two or three you already have. I have these three things, and because the graph is densely connected, I can go there, it's easy. And if I go there, I don't lose any connection to anything else. So that's one way. Another way to do this is that agents start forming beliefs by observing real life, reality. Agents who have a lot of observations about the world are likely to have a more accurate model of the world. So the more they make predictions about things that eventually happen, the more we can give them a reliability rating, right? One, this person sees a lot of things; and two, when they say something, it tends to happen. So we give them a high rating. And then the agents talk to one another and coordinate their beliefs. Eventually there's going to be this one person over there who says green while everybody else says red. The probability of green being true looks low; there's high surprise there, so we give it a lower reliability rating. But if it does end up happening, well, then the value is much higher as well, right? Because we reduced a lot of variational free energy. So I think that's how we're going to get there: these two things combined, the graph with nodes you can connect together, and the agents with reliability ratings that swarm toward a true belief, are ways we can integrate all this. Cool. The first mode you had, I kind of wrote down: dense connections, ideas that have grounding. Good maps for territories. More stable territory, more effective map.
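The reliability-rating idea sketched above can be made concrete: each agent's weight grows with its track record of confirmed predictions, and the group belief is a reliability-weighted vote. The smoothing scheme and function names are my illustrative choices for a mechanism that is still being developed, not an implemented model from the paper.

```python
# Sketch: reliability ratings from prediction track records, and a
# reliability-weighted group belief. Illustrative mechanism only.

def reliability(correct, total, prior_weight=1.0):
    # Laplace-smoothed hit rate: avoids rating 0 or 1 from tiny samples.
    return (correct + prior_weight) / (total + 2 * prior_weight)

def weighted_group_belief(reports, track_records):
    """reports: list of 0/1 claims; track_records: list of (correct, total)."""
    weights = [reliability(c, t) for c, t in track_records]
    return sum(w * r for w, r in zip(weights, reports)) / sum(weights)

# Nine mediocre predictors say "red" (0); one well-calibrated agent says
# "green" (1), the surprising minority claim from the example above.
reports = [0] * 9 + [1]
records = [(5, 10)] * 9 + [(95, 100)]
group_belief = weighted_group_belief(reports, records)
print(group_belief)
```

The reliable dissenter pulls the group belief toward "green" more than a flat one-agent-one-vote average would, which is the intuition behind weighting surprising claims by their source's track record.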
It's like in a family: if you went to someone else's family reunion, you shouldn't be too surprised that any given two people know each other, because there's a kind of coherence, a prior about the density of that network. Whereas if the network is very sparse, then its coherence is susceptible to single facts being wrong. So you point to where a thread is tied: well, this person knew that person, and then this money was sent here. And if even one of those things weren't true, then the whole contraption is fallacious. And then the dense connections have an element of conformity to them, because you have an intrinsically validating world model, like the epistemic network of epicycles and that way of predicting how the planets moved. And then you have insight from outside that goes against the conformity bias, the prior, but that can be how the community moves into a new space. So we can't have just novelty-driven, simply-different-is-better, but we also can't have simply-the-same-is-better. Which puts us in this active inference situation, which is why it's so cool to see epistemic modeling like you've all done. There's something really interesting here about the optimal space around which you settle, computing the right amount of information. So Mark Miller is working on this, Alex Kieffer is working on this as well, and I know we're working on it with Alejandra Assyria from Alara too. There's this whole group working on: what is the optimal computation space, right? And this is truly the question you just posed: should I stay exactly where I am, or should I try consuming error? And I think that's the beauty of groups, and this is where we get to what a group is in terms of the Markov blanket, right? So in the inner spaces of a group, you'll have people who are less likely to start consuming error.
They're just like, you know, they're just staying within what's being done, and that's stability, right? And then towards the external parts of the group, there are people who are more connected to other groups, right? And who can start consuming error. And this is because the internal parts of the group are very densely connected to one another because, you know, they're an in-group. This error can be passed in, like it can be consumed in a way that is efficient for the group to deal with, through this sort of blanket. So the real question becomes, like, what's an active state? What's a sensory state at the level of a blanket for a group? But you can start to conceptualize it as, like, you know, your influencers and your lurkers, right? Like the lurker is consuming information and the influencer is promoting information. And then this whole thing becomes sort of like: there are people who just consume what this one dude says, and there are people who just push out information, and then maybe they take a little bit of information, but they have, like, you know, this sparse network of people, like people who have 90,000 followers but they themselves follow three. So then there's this sort of layering of people who listen to others. And the smaller people, who have fewer followers, they're still connected to one another, and there's degrees through which I think information is passed and consumed. And so what Mark Miller is trying to say is that the computational optimal is where you compute enough error that you continue to grow in your sort of fitness landscape. So you're still learning about things, you're still discovering, but you're still stable enough that no error is gonna topple you, right? So because you have all these other people here who keep your group relatively stable, you know that consuming a little bit isn't gonna, like, destroy your group, right?
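The influencer/lurker reading of a group's blanket states can be made concrete with a toy follower graph. Everything here is hypothetical and invented for illustration: the graph, the degree thresholds, and the role labels are not from the paper, just a sketch of the layering described above (influencers broadcasting like active states, lurkers absorbing like sensory states, the densely connected core as internal states).

```python
# Hypothetical follower graph: each key follows the accounts in its list.
follows = {
    "lurker1": ["influencer", "core1", "core2"],
    "lurker2": ["influencer"],
    "core1": ["core2", "core3"],
    "core2": ["core1", "core3"],
    "core3": ["core1", "core2"],
    "influencer": ["core1"],  # many followers, follows almost nobody
}

def degrees(graph):
    """In-degree = number of followers; out-degree = accounts followed."""
    out_deg = {n: len(v) for n, v in graph.items()}
    in_deg = {n: 0 for n in graph}
    for targets in graph.values():
        for t in targets:
            in_deg[t] = in_deg.get(t, 0) + 1
    return in_deg, out_deg

def role(node, in_deg, out_deg):
    # Thresholds are arbitrary for this tiny example.
    if in_deg.get(node, 0) >= 2 and out_deg.get(node, 0) <= 1:
        return "active"    # broadcasts to many, consumes little
    if in_deg.get(node, 0) == 0 and out_deg.get(node, 0) >= 1:
        return "sensory"   # consumes information, broadcasts nothing
    return "internal"      # mutually connected in-group core

in_deg, out_deg = degrees(follows)
roles = {n: role(n, in_deg, out_deg) for n in follows}
```

On this graph the classifier recovers the intuitive partition: the influencer lands in the "active" role, the lurkers in "sensory", and the mutually connected trio in "internal".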
It's like, think of a polycule: we know we're in love, this new metamour is not gonna break our shit up. So it's good, right? So that's, I think, what allows this sort of large-scale coordination among groups: that there are people who are specialized in consuming the error, and who can only do this because there are other people who ensure there's resources being produced for them. Cool, two thoughts. So one, the core-to-periphery or fringe continuum, it also happens in ant colonies, where there might be a brood chamber that is the most homeostatically controlled, and then there's a periphery out to kind of a lobby or an antechamber where there is a lot more in and out, and then of course that's the interface with the external world. And so then there's analysis with tracking of where the ants go. And so there'll be some whose spatial distribution is more like the top parts of the nest, or just the entry room and a little bit in. And there might be another one that spends more of its time towards the core and then goes a little bit out. But that last piece that you said, like, what roles contribute to colony or epistemic community stability or collective open-mindedness or resilience? Those are such critical questions that people frame qualitatively every day. And so it's just really cool to have the conversation and open it up to many different perspectives about what roles people play in communities, and then try to use the model to follow up and describe complex situations rather than constraining what communities can be or how they can regulate. Yeah, and there's something beautiful about active inference here, right? 'Cause we could, like, we can literally potentially measure or predict what might be an optimal state for a group given the free energy that the group can consume, right?
So if we can understand how this consumption of error can be distributed over the group, or at least can be absorbed, we can say, well, listen, we don't need to destroy your QAnon group. All we need to do is give you a little bit, you know, enough, in the right distributed areas, so that you guys integrate it on your own and potentially shift, without us having to further push, and potentially even pull, right? It's like, one of the worst things you can do with one of your QAnon family members is antagonize them, because then they isolate and they're gone. You're never finding them again. They will be lost to that thing, which will consume them. And it's not just true for fringe groups. It's true for a lot of groups where we're like, we rely on that community for survival. As we said, there's a lot of communities out there who need to stay tight-knit because, well, it's beneficial for them to be tight-knit. So let's say we're like a city, right? And there's nothing inherently wrong about being a city. And what you wanna do is try to prevent a catastrophe, right? But we can't all be researching the types of catastrophes that might happen, and someone's gotta, you know, make bread, right? So you start having these people who you can predict over time, so that I don't have to think about this. I don't have to pay attention to this scale, because I'm paying attention to this, and I know that I can rely on them for that. So my model of the scales makes it so that my role is super clear and allows for others to make these predictions, right? So I think, I mean, it's not fully realized yet and there's a lot of work to be done still. But I feel like we have it, it's right there, it's right there. Here's something that kind of makes me think about, like, the city is enabled by specialization and all these other features. What are epistemic infrastructure projects?
Like, what are the thankless or thankful jobs that people do, like moderation, governance, care for themselves and for other participants? What are those infrastructure-tier roles where it's like, this is building the road, so we have to contextualize how well this shop does in light of the effort to build the road, the community that contributed there. And so it kind of opens up the idea of attribution in communities, a very generative space. Yeah, so I feel like it would be difficult to say that there is a very clear distinction in any given infrastructure, right? So like, every field has people who innovate, every field has people who just stay on tradition, every field has people who just provide. It's not just like bakers just provide, right? There are people who innovate in bread. There are people who think of new recipes, or think of new ways, or try to bring the same recipes to different parts of the world. So it's more distributed than that. But I really do think that these structures kind of self-emerge, right? So like, we all kind of have a... not necessarily, I'm not gonna make that claim. But I do believe that there's a natural instinct to try something new once you have the certainty that shit isn't gonna fall out from under you, right? And because we have this sort of distribution of people who have more or less money, and people who have had past experiences of having more or less money, they kind of tend to focus. And so it naturally sort of synchronizes into: well, you know what? I was poor my whole life. I'm gonna be careful. I'm not gonna go crazy. And that's why the fallacy of the entrepreneur who had nothing and became something, like, that is bull. It is so easy to be an entrepreneur when you have barely anything to lose. The notion of risk in capitalism isn't true. They can do it because to them the risk is minimal. If I lose everything tomorrow, I can do it again. So there isn't really a risk.
Whereas if the little lady over there who has like 20,000 in her bank account decides to put everything in a business, it's a real risk. It's a real risk. And obviously she's less likely to do it, because what happens to her children if she loses everything, you know what I mean? So there's gonna naturally be these tiers of people who, because of their state space, right? So the probability of them continuing to access resources over time is less spread out. They have to stay concentrated on this. And by virtue of doing this, they also, you know, keep producing value and resources for people who tend to accrue this and thus have more leeway to try shit out and innovate. I mean, there's a reason that it's so easy to go to university when your parents are rich. Obviously, if your parents have the means to support you, you're gonna try and continue studying longer and longer. And it's much harder if you have to support yourself. It's not impossible, but it's so much harder. So anyway, yeah, capitalism. Well, to connect it to active inference, you kind of brought up the situational risk sensitivity and how that relates to someone's experience in their situation. And then in active inference, the policy selection, whether the binary decision, go to this university or go to some other path, or a continuous policy, it's broken down into that pragmatic, or kind of outcome or goal-reward orientation, as well as the epistemic. And so what happens when people don't have to have anxiety or find out epistemically about how they're gonna get pragmatic reward for their action? And how does that move us beyond the sort of Web 2.0 engagement-maximization relationship with platforms, where there's a very specific generative model, generative process relationship?
And it's based around, like, behavioral summaries of the person's time spent on the site, or their spending or something like that, versus taking the cognitive reins and saying we wanna have lower anxiety and we want people to be exposed to different perspectives, or some other preference for platforms that's specified using an active inference framework. That was a lot in there. So I'm gonna try to think of the generative model, as always. Generative models aren't one scale, right? There's scales on scales on scales on scales on scales. And what happens when you go across scales is that your states at one level become the observations at the next level, and then it embeds and embeds and embeds. Okay, and then the embeddings can sort of, like, again track into one another. So I'm here and there's the bear, so I'm gonna stop focusing on you, which is a higher level, I'm gonna focus directly on this right now, and there's the bear, and I'm gonna focus on the bear. But eventually the bear goes away, because I was having a hallucination, and suddenly I can go back to a higher level and start monitoring other things and notice, okay, well, you know what, this lasted a long time and maybe I should be cooking because I'm gonna get hungry. So I'm gonna slowly navigate the scales of where I'm supposed to put attention based on how the free energy is rising at a given point in this scale. All this to say, for the web, or for monitoring, or for resource attribution: what am I really trying to do? What I'm trying to do is to make sure that a person never has to feel like they're constantly putting out fires. So anxiety is that, right? Anxiety is the constant belief that you're, like, peaking everywhere and you have to pay attention to everything, you're hypersensitive. So let's lower that.
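The pragmatic-plus-epistemic decomposition of policy selection mentioned above can be sketched numerically. A common textbook form scores each policy by its expected free energy, G = risk (divergence of predicted outcomes from preferred outcomes) + ambiguity (expected uncertainty about outcomes given states), and selects policies via a softmax over negative G. The two policies, their toy distributions, and the ambiguity values below are made-up numbers for illustration, not anything from the paper.

```python
import math

def kl(q, p):
    """KL divergence between two discrete distributions."""
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

def efe(q_outcomes, preferred, ambiguity):
    """Expected free energy of a policy: risk + ambiguity. Lower is better."""
    return kl(q_outcomes, preferred) + ambiguity

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# The agent strongly prefers outcome 0 (e.g. "fed and safe").
preferred = [0.9, 0.1]

# "Stay": predicted outcomes close to preferences, little is learned.
g_stay = efe([0.8, 0.2], preferred, ambiguity=0.05)
# "Forage elsewhere": outcomes less aligned with preferences, but the
# policy resolves more uncertainty about hidden states (lower ambiguity).
g_forage = efe([0.6, 0.4], preferred, ambiguity=0.01)

# Policy probabilities from a softmax over negative expected free energy.
p_stay, p_forage = softmax([-g_stay, -g_forage])
```

With these particular numbers the pragmatic term dominates, so "stay" is favored, which is the confirmation-bias-flavored outcome; shrinking the preference gap or raising the "stay" ambiguity tilts selection toward exploration instead.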
Let's feel like you can start giving a little bit more attention to a longer timeframe, or even a shorter one, just in the moment, because all the parts of your model are pretty certain, like, there's not gonna be a fire. So I have certainty across all my scales, and I'm gonna start paying attention to just this now, and I don't need to give too much to my model. How do you do this with monitoring? Well, with monitoring you're like, well look, given this trajectory right now, and given this personality type, and given what we know already about her, she's gonna freak out in about five minutes about X, and I'm monitoring this. So before she freaks out, let's give a little reminder about this, or, like, actually through IoT or whatever, let's start something. My dream is, like, in the morning I wake up and my breakfast is already started. Nobody's starting it for me, it's just starting. So my house predicts the kinds of things I'm likely to need, so that eventually I never, ever have to feel like I need to pay attention to anything where too much uncertainty is gonna rise anymore. I'm not sure I answered your question exactly, but I feel like I'm going in that direction. So there's the preparing your breakfast when you wake up, and there's sort of the cybernetics 1.0 version, which is: we have a prediction about what time you're gonna wake up, and then we're just gonna do it, like setting the thermostat, we're gonna just prepare it to be ready at this time. Then this is kind of taking that physical biofeedback layer to another level. But also, what should we epistemically forage for in the morning? How do we experience that sleep-wake transition in a way where we wake up and sort of sharpen, or become more vigilant, and focus on the things that matter and that have impact, and at the end of the day have some unwind or some sort of relaxing process so that we can move into a healthy sleep?
There was a... ah, there was this... I'm watching a lot of YouTube lately. There's this one dude who tried to replicate a billionaire morning, right? It was striking what the billionaire does in the morning, which is basically like, I'm gonna do all this shit to make my body feel better, and all that great stuff, and then he did this stuff to think about who he wanted to be today and what kind of qualities he wanted to embody. And it's like, yo, I gotta think about not missing my bus, or whether I'm gonna freeze outside my house, or whether I might get fired if I'm late. You see what I mean? My concerns, my direct epistemic concerns in the morning, are vastly different from what that dude has to care about, because everything that I'm talking about, he's certain about. He has no qualms about it. He does not care. So, how might one sort of shift this? Well, I feel like there could be structures that we could put in place in order for someone to not feel like they have to, you know, feel that. But also maybe a focus, I know it seems hard to say, but maybe a focus on the things that he has to focus on would potentially limit the sort of, you know, constant need to feel like you need to put out fires. So let's say that I'm a smart interface, and I'm like, well, this person needs to be certain about these things because their life has X constraints. Well, maybe what we're gonna do is give you X reports, and look, you don't need to worry about it. It's there, you have that information. Now, what about you try to think about things that are longer scale? What about you try to think about things that, if you had nothing else to care about, you'd start thinking about? So, you know, maybe these are the sorts of epistemic foraging that would lead people to a slightly more peaceful life. But maybe I'm coming from a very privileged space and maybe this is clearly unrealistic.
So you'd have to ask; people who are much less privileged might have a different view on this question. Awesome points. Just in our last few minutes, what are you gonna continue to work on, or what are you excited about, or what do you wanna talk about, or ask anything you wanna talk about? Well, I mean, you mentioned your paper but we never got to it. We have time, or? Sure. So with a handful of co-authors, we wrote a paper about using active inference to study specifically the epistemic community of science. And so we, as a group, read and shared your paper on epistemic communities and then took the idea of this epistemic entity. And now we've seen active inference entities at different scales. That's several years and many different things we've talked about there, including the narrative scale. And so we thought, well, how could we model the scientific epistemic ecosystem? How could we do a systems description of the scientific ecosystem? So the entities, like the person, but also the lab, the university, the DAO in the case of decentralized science or Web3 or open science, how do we model the active entities? And for the active entities, eventually flesh out a cognitive model like you've presented today, but also include the informational entities, which map not onto the adaptive active inference agents but the mere active inference agents. So we can still model the paper as an entity, but it doesn't have any policy selection that it can engage in, whereas code can engage in limited policy selection when it's used by a computer system. And so the goal was to develop an ontology, the AOS, the active entity ontology for science, that would be able to link up natural language descriptions, like "I sent a message to my friend": person, friend, sent a message.
So have the natural language representation, with the graphical flow chart representation, kind of like a social network of different kinds of entities, including the informational ones, and then be able to connect that to a simulation architecture. Sorry, you cut off, that's all. So, okay, so you were gonna try and simulate agents using your ontology? So basically, like... So the first AOS paper from last week, we focused a little bit more on the conceptual side, the history of decentralized sense-making and history of science, and an introduction to Web3 and DeSci, decentralized science. And then some of our ongoing work and collaborations are to go from the flow charts that we kind of sketched out in some of the figures, like a funding motif or an information-sharing motif or a communications motif, into a simulation that uses the active inference entity model. Yeah, wow. You should really just talk to Maxwell about this, like, he's been trying to think about how communities relate to active inference specifically. And this makes me think of Lakatos, right? Like how there's this core, and then people test out the external hypotheses until they reach the core, or if they don't reach the core, it expands. So there's something really cool there. I can't wait to see more of this work. Awesome. Yeah, I hope that we can continue strangely attracting and intertwining and including collaborators from different backgrounds in the discussion, as some people who are listening might be hearing about active inference for the first time and might be connecting with, like, a community in us that likes to do this kind of modeling and framing. And also there's people who might be more familiar with the active inference modeling, and hopefully this brought their attention to a really important and contemporary case and, of course, excellent recent research. Yes, let's do interdisciplinary research, for the win. Cool. Anything else you want to add? Oh, no, we've covered a lot of graphs.
I know, I know, seriously, the time goes fast. So thanks a ton for joining. See you around. Bye.