Hello everyone. Welcome to the Active Inference Livestream. It is Active Inference Livestream 11.2 on December 29th, 2020. This is our last group Active Inference Livestream of the year. So thanks so much to everyone who is out here today, and also to those watching live, and I will share my screen for the participants, and here we go. Welcome to the Active Inference Lab, everyone. We are an experiment in online team communication, learning, and practice related to Active Inference. You can find us at our website, on Twitter, email, YouTube, or our public Keybase team and username. This is a recorded and archived livestream, so please provide us with feedback so that we can improve on our work. All backgrounds and perspectives are welcome here, and as far as video etiquette for livestreams: mute if there's noise in your background, raise your hand so that we can hear from everybody on the queue, and we'll use respectful speech behavior. As I stated, this is the last Active Inference Livestream for 2020, but 2021 is already shaping up to be a great year. We're gonna be meeting Tuesdays from 7 to 9 a.m. Pacific Time (PST, or PDT when the time changes), and you can go to this link to learn more. When you go to that link, you'll see this spreadsheet, and it has instructions on how you can participate, as well as which papers we'll be reading in which weeks. So you can see that many authors will be joining us, and hopefully everybody will have the time to read and digest these papers. Today in Active Inference Livestream 11.2, we are going to have introductions and warm-ups, and then we'll go into the 11.2 itself. We're gonna be continuing the discussion of Sophisticated Affective Inference, the paper by Hesp et al. (2020). Welcome, Sasha. We'll run through again the aims, claims, abstract and roadmap, and the figures. And happy New Year's, everyone. Thanks a lot. We'll do new stuff in 2021, so stick around. Let's get to the intros and warm-ups. 
We will introduce ourselves just by giving a short introduction or check-in, and then passing to somebody who hasn't spoken yet. So I'll start. I'm Daniel, and I will pass it first to a first-time visitor, Scott. Hi, I'm Scott David. I'm at the University of Washington Applied Physics Lab, and I've been a lawyer for 27 years. I'm trying to use physics principles to help structure social relationships. And I will pass it to Stephen. Hello, I'm Stephen. I'm based in Toronto, currently doing a practice-based PhD with Canterbury Christ Church University, and I'm looking at applying the dynamic process of Active Inference to my work with participatory theater and community development, and helping to understand the dynamics of all of that. And I will pass it to Ivan. Hello, my name is Ivan. I am in Moscow, and I'm a researcher at System Management School. Pass it to Blue. Hello, I'm Blue Knight. I am an independent research consultant based out of New Mexico, and I will pass it to Sasha. Hi, I'm Sasha. I'm based out of Davis, California, and I study developmental neuroscience, and I will pass it to Alex. Hello, my name is Alex. I'm in Moscow, Russia. I'm also a researcher at Systems Management School, and I'm trying to find ways to connect Active Inference and the Systems Engineering framework. Thanks. Cool, and it seems like Ryan has the same situation as last week, but definitely for 2021, we'll figure it out. I'll just tell him it still does not work. All right, let's go to the warm-up questions, and again, if anyone else joins, we'll just roll with it. So for these warm-up questions, raise your hand. It'd be great to hear from everybody. Let's just start off with: what are some kinds of agents that we might want to model in Active Inference? So just even before reading this paper or after reading the paper, what was the kind of system or type of agent that we might want to frame in this way of thinking? 
So emotion makes us think about individual people, but there are other kinds of systems we might wanna model as well. Actually, Scott, you were mentioning some interesting types of agents we might be considering right before we started. So maybe could you pick up there? Sure, I was mentioning that in law, it feels like one of the strategies is to introduce a provisional agent, a nominal party, into an interaction. So if you have two people interacting, you can introduce a third. By introducing the third, it changes the system, because now it's a three-party system with three-party concerns, and that system then will have a different Markov blanket, as I am beginning to understand it. And so in that context, you may be able to do things you can't do otherwise, through the other Markov blanket. An example is in tax: there's a thing called a like-kind exchange, where I can exchange commercial property for another commercial property tax-free. Well, very often you can't find someone with exactly the property you're looking for. Let's say you have a 20-acre farm, you want a 30-acre farm. There's nobody around with a 30-acre farm. You use an exchange facilitator, and that means there's a third party in there. They take possession of both properties and act like a mixing bowl. And so, putting the details of that aside, it essentially changes that system. So it changes your Markov blankets. And so one of the things I'm wondering about is the intentional shifting of the Markov blankets to increase solution space. Cool. So we have all kinds of agents who are gonna be interacting, and potentially by shifting the boundaries, the interfaces between agents, and wrapping them or sub-partitioning them, it's possible to make new kinds of swaps within existing infrastructures or new ones. So anyone can just raise their hand. 
I'll just put up the second question, which is another general one: how can we learn and communicate the technical aspects of active inference in an accessible fashion? Stephen, go ahead. I don't know if I'm gonna explain how to do this, but I'll explain the challenge, which I think is important to overcome because it's so valuable. I was just in a conversation trying to talk about systems, and how we think about systems when we engage in communities, and thinking about how we understand the world. And there's a lot of stuff in the world of the social arts and participatory theater where it's like, we need to do something with the system. And then it flips to: we're all a living system, the body knows, and we just know how it is. We just need to listen to the body and it will know. And it's both: we're in this dynamic niche, of which there's an awful lot of stuff we don't know, yet we can tap into our body and the way that our body works, and there's a way that we construct meaning in the world. So I suppose trying to find a way to take case studies and show how it can help will be really useful. Thank you. Sasha? Yeah, thanks, Steven. I really, I agree with what you said, and ironically, or I guess in a very meta way, active inference is the answer to that question, because through closing the loop and introducing action into the system, we can better understand thinking about the system. And so I think that's a really great point, that case studies and really specific applications of active inference are gonna be the best way to teach and learn about it. So yeah, I think that's a great point. Cool, Blue. 
So just to kind of piggyback on that, I think, and maybe tie in some of what Scott was saying earlier: when we think about modeling in active inference, it's interesting to think about scaling this, not in the way that we talked about when we were talking about scaling active inference, but in the way that we have several different systems even just existing within the body, right? Like the input and output of the stomach and the spleen and the liver, and also the brain and the somatic awareness, right? To speak to kind of what Steven was saying earlier. So it's interesting to try to think about modeling systems within systems in active inference. Yep, and actually this brain-and-body dialogue and the multi-systems approach, in the way that it doesn't say, well, everyone told you the brain was important, actually it's the body. No, actually it's both. And similarly, it's like quantitative and qualitative. And so it's not just that we're going to throw out one for the other, but we want to think across that division and integrate them. That's the beyond-internalism-and-externalism, integrating the sciences with the humanities, and other things that we're kind of talking about. Scott? I just put in the chat, there was an article that came out on December 4th in Science Magazine called "Exploitative Segregation of Plant Roots". And so this helps with that second question, learning and communicating it. What it was talking about is the way that plant roots grow. Specifically, you have two trees next to each other; each has a system, each has a Markov blanket. They're growing next to each other, and they differentiate their root growth to statistically optimize their nutrient uptake. And so they don't go and intrude on other plants' root space, because that would not be optimal for them: they'd be putting more resources into growing further roots but getting fewer nutrients, because they'd be nearer the other plant. 
And so to me, I looked at that and I went, wow, that's a real nice way to understand the dynamics of two different organisms interacting with statistical gradients. There it's a nutritional gradient specifically. That felt to me like a nice way to introduce the idea of contact between two different systems. Cool. So what I'm gonna take from the second question and all the answers to it is that through examples that are accessible and at hand, we can all start to learn more about the relational side of active inference and also the statistical and the quantitative, because both those sides are gonna be learning journeys for everybody: no one's gonna know all the statistics, no one's gonna know all the deep relational pieces that are so familiar to performance artists and those in other areas. The last question on the warm-up is: what is something that you are wondering about, or would like to have resolved by today's discussion? So one thing I'm wondering about, which was raised right before we started as well, is how do we think about emotion, or this parameter, in systems that aren't us? And I'm also even wondering whether the emotions we experience are these parameters, and how do we name parameters, or talk about parameters of mathematical models, in a way such that they're recognizable and capture meaning, but also don't constrain or falsely limit what it means to be that term. So we can get into a map-territory issue when we're talking about computational models of emotion, just like we could with consciousness. Blue? So just to kind of piggyback on what you were saying about emotions: I don't know, as I'm starting to study more about emotion, I'm a logic-and-science kind of researcher, but learning about emotion, just reading personally, a lot of it seems to be non-verbal, right? 
And so how do you quantify or even measure emotion on a scale, when it's really like you're feeling some primitive thing from when you were left in your crib as a child, that you have no words for because you didn't have words for it at that time? And so how do you impart some kind of meaning or quantitative scale on something that maybe there's not even language for? Yes, it's almost like the discredited but guiding three-part (triune) brain model. So it's like the most prefrontal is the numbers, at that level of formalization, and then a little bit lower we have language, but then below that, the substrate is actually experience, and that's the embodied experience that may or may not even have words to express it. And then we can look at expressions like poetry or speech, and is there a way to get more precise, or is it merely an expression that's arising from an emotional state, but the emotional state isn't a number, it's not words, it's actually something that's deeper. Scott? I was wondering, with regard to dance and performance, whether there are ways of communicating, because we talk about embodied, and I wonder about encoding. So could emotions be encoded in dance in a way that they can't be encoded in speech or other movement, so that it becomes a surrogate, or kind of that mirror-neuron idea: that you're projecting out an emotion by the projection of your body position, the morphology being the encoding of the emotion. Cool question. Stephen, definitely. 
Yeah, I think this also relates to this question of affect and dimensionality. So maybe there's a way that when we take a perspective on things, we kind of look at a dimensional approach, but it can be encoded in some sort of spatial, affective way, and maybe those dimensions become like dimensions, but the encoding is not as linear as that. A bit like color space: when we see colors, you can encode the space of colors in lightness and two different types of color dimensions, but when people look at perception, the way that it's encoded is not linear in that space. So maybe there's an affect space which somehow encodes in a way which is non-dimensional. So might, oh, sorry. Go ahead, go ahead. So might aesthetics be provisional efforts at encoding? Very nice to bring in the aesthetics and the intuition, because intuition is that which can't be expressed. It's sort of like the dark matter; it's the submerged part of the iceberg. But those reflect the deepest priors, potentially, that help us search through the state space that, as we've been exploring, is vast. To do the exponential rollout of a chess game is gonna be hard. It's gonna be basically more states than you can keep track of. However, if you think, I feel good about this arrangement of the pieces, then with just a simple affect and a limited rollout, you may be able to do very well. So again, Ryan, we're not sure, but we'll figure this one out. Okay, just one other thing. Is that like the holographic principle, potentially, that the information about a system is displayed on its surface? Is a Markov blanket potentially that surface that you observe to understand the system inside? I believe so, and Chris Fields, who will be coming onto the stream, will be talking about a paper in early March. Chris has done a lot of holographic Markov blanket research. So let's keep on thinking about that, because that's very interesting. 
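The chess intuition above (a cheap "I feel good about this arrangement" signal plus a limited rollout) can be sketched in code. This is a hypothetical toy, not anything from the paper: the "game" is just moves on a number line, and the `affect` function is a stand-in valence heuristic.

```python
# Depth-limited rollout with a cheap "affect" heuristic at the leaves,
# instead of exhaustively rolling out every branch to the end of the game.
# Toy stand-in for chess: each move shifts a position on a number line,
# and "feeling good" means being near a preferred target state.

def moves(state):
    """Available moves from a state (hypothetical toy game)."""
    return [state + 1, state - 1]

def affect(state, target=5):
    """Cheap valence signal: negative distance from a preferred state."""
    return -abs(target - state)

def best_value(state, depth):
    """Value of a state under a rollout truncated at `depth` plies."""
    if depth == 0:
        return affect(state)  # truncate the rollout; trust the feeling
    return max(best_value(s, depth - 1) for s in moves(state))

def choose_move(state, depth=3):
    """Pick the move whose shallow rollout feels best."""
    return max(moves(state), key=lambda s: best_value(s, depth - 1))
```

The point is that the depth-limited search never enumerates the full exponential tree; the affect heuristic at the truncation depth does the remaining work.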
I'm not exactly sure, but yeah, hopefully we'll figure that one out within a few months. So, the paper itself, and then again, we'll have enough time just to hear whatever people are thinking: Sophisticated Affective Inference. We talked about it in 11.0 and 11.1, and the paper is about combining two previous threads of research, which were sophisticated active inference (sophisticated meaning deep through time) and affective inference, or active inference where there's an emotional-state component. The abstract and the roadmap of this relatively short paper are pretty short. So today, let's think about how these two threads, which are distinct threads (that's why they each have their own paper), can be combined into something that is in front of us today, which is this paper that literally combines these two threads of research. And then we can think about a few directions that this research makes us think about, whether developing more advanced simulations, so just asking which parameters of the simulation could be improved, or how this could be expanded or generalized. But also, on the more functional side, we can look to computational psychiatry, because we're talking about emotions, and also even non-human agents like robots. So there are a few different directions we can go, and to structure that discussion, it could be useful for us to have a few examples, like two or three or four, of sophisticated affective inference agents, conventional or unconventional. So certainly one of them could be a person, but we'll think of that person being in a certain context, and then we'll return to these examples that we come up with right now as we go through the slides ahead. We'll just return to example A, B, C, and then when we're talking about parameters, we'll be able to say, okay, so in example A this matrix would represent this, and in example B it would represent that. 
So who has a thought or a suggestion on what would be a cool system to bring into this mixture? So again, one of the examples will be a person, and people can suggest what they think that person's story or situation should be, kind of like an improv scene. And then also, if people wanna provide an unconventional system, that'd be good too. So any thoughts on that? Alex? I think maybe we can consider the next system scale and look at team behavior. Great. So, try to define these concepts in terms of team communication, inside and outside communication with different subsystems of an organization. Thanks for suggesting that. So let's make example A an individual human. So that will hopefully ground us in our lived experience and in realistic situations that people and comic-book characters find themselves in. The second example, B, is going to be team communication. So we'll kind of develop the story, but we'll see. Maybe the team is a startup team, or maybe they're a research team, or maybe it's just an affinity group about some other topic. And then for the third example, let's think about something more organizational and institutional. So let's think about that third example being potentially a large company, large enough to have a legal department and an HR department, let's just say. So we have three examples, which are gonna be the person, the team, and the institution slash organization: three levels of analysis. And then as we walk through these formalisms, let's see where there might be strengths, weaknesses, limitations, insights, et cetera. Steven. And I suppose you could apply it to all of those. This idea, you know, we have this vision and mission. I mean, that's more for an organization. The mission and the vision transfer down to a team, which might be the purpose and the roles. And then you've got the character and the, you know, meaningfulness for the individual. 
So you've got this temporal depth being expressed across all of those levels of scale. Yeah, exactly. The examples in this case, they're not three disjoint examples. In fact, the individual could be on a team in an organization. So all three of these examples that we're gonna be talking about could be nested within each other, or they could be disjoint. And so that really highlights why it's so important to have a multi-scale perspective. Because even when we're talking about the individual, we're gonna be thinking about the individual as the system of interest. It's what we're talking about. It's the interface we're talking about, the Markov blanket that we're talking about, but it's inherently embedded within a larger system. So we're not gonna forget the company when we're talking about the individual, and vice versa. So it's always all the scales happening at once, but then depending on the system of interest, or the Markov blanket, or the level of coarse-graining that is being performed in that specific analysis, it will dive into the details as they present themselves. But that will be kind of fun. So here's figure one in the paper. And we went a little bit in the previous videos into some of these sub-components, because this is like M4, and there's M1, M2, M3 as well that lead up to M4. But just to recap what each of these pieces is: O at the bottom is the observation, and S is the state that inference is being done upon. So this is where those examples come into play. So in the individual case, the state that an individual might be performing inference on could be: is it sunny or cloudy outside? And maybe that changes how you feel. Or it could be a hidden state like: is this person happy with me or sad with me? A is the mapping of how those hidden states relate to the observations. So whether it's sunny outside at the individual level, and then the observations are perceived with a certain probability. 
And then, yeah, people can just raise their hand if they wanna jump onto another example or ask a question about how some of these variables might interact. I'll just walk through them with no example, and then we'll go to the example pieces. B is how the states are inferred to change through time. So B is like the probability that, given that the system was in such-and-such a state at one time point, it would be in a different state at the next time point. Now, U is a piece that connects to B, and this is the policy. And so the policy is something that influences how states are likely to transform into each other. The policy itself is downstream of the agent's decisions about which actions to engage in. It turns out that the way that action selection is chosen at the level of analysis, whichever blanket we're talking about, is action selection guided by this set of parameters. The G is that expected free energy that's being minimized, and that also covers the C matrix, which is like the preferences. The gamma is the precision, which is like one over the uncertainty. And basically this relates to how precise the agent believes it is in its estimate of which policies to be pursuing, given this chain of things happening down here. Okay, at each time point in this model (I wanna be general, but I wanna be specific about what this model actually did), that's being captured by this top sort of pinkish area. This is the affective inference question, where the affect of the agent relates to how it simulates rolling out, in this whole internal gray zone, all these future time points. And then each model tick is going to be like moving from the left to the right. And at each time point in this model, the agent is going to be rolling out some possibilities for what could happen in the future. So Stephen, and then we'll think about the examples. 
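To make the walkthrough concrete, here is a minimal sketch of these pieces in code, assuming the toy sunny/cloudy example from the discussion. The matrix values, the single-step expected free energy, and the precision value are all illustrative stand-ins, not the paper's actual simulation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

# A: likelihood P(o | s); columns are hidden states [sunny, cloudy]
A = np.array([[0.9, 0.2],   # observe "looks sunny"
              [0.1, 0.8]])  # observe "looks cloudy"

# B[u]: transitions P(s' | s) under each policy u (illustrative)
B = np.array([
    [[0.7, 0.3],
     [0.3, 0.7]],           # policy 0: weather mostly persists
    [[0.5, 0.5],
     [0.5, 0.5]],           # policy 1: weather becomes unpredictable
])

# C: log-preferences over observations; the agent prefers "sunny"
C = np.array([2.0, 0.0])

def expected_free_energy(qs, u):
    """Single-step G for policy u: risk (divergence of predicted
    outcomes from preferences) plus ambiguity (expected observation
    entropy). A simplified toy form of the usual decomposition."""
    qs_next = B[u] @ qs                     # predicted next state
    qo = A @ qs_next                        # predicted observations
    risk = qo @ (np.log(qo + 1e-16) - np.log(softmax(C)))
    ambiguity = -(qs_next @ (A * np.log(A + 1e-16)).sum(axis=0))
    return risk + ambiguity

qs = np.array([0.8, 0.2])                   # belief: probably sunny now
G = np.array([expected_free_energy(qs, u) for u in range(2)])
gamma = 2.0                                 # policy precision (assumed)
q_u = softmax(-gamma * G)                   # posterior over policies
```

Here gamma scales how deterministically the agent commits to the lower-G policy; the affective move in the paper is, roughly, to let the agent infer and update a quantity like gamma over time rather than fix it by hand.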
Yes, sorry, I was gonna sort of bring in an example there. So maybe... Go ahead, go ahead. Yeah, well, one thing that's come up when I've been doing some work with this: Metaphors of Movement, and Andrew Austin, who does that work, talks a lot about how he finds anxiety is often related to people trying to have an elevated status. And it makes a lot of sense with this, because you feel like you put yourself in an elevated status, and you're always gonna be slightly uncertain, unless maybe you're a king or someone, about that status being maintained. So being put on a pedestal can be like that. So there could be something about Western society and status, which is partly why we have this overarching anxiety. And it would make sense in a way, because there's always a sense of never being completely clear about whether your status is gonna be maintained. So I thought that's kind of interesting, and this would sort of support that kind of observation. Yep, so status, or various other social-approval metrics we can think of, is like the water we swim in as social organisms. So just like a company might be wanting to stay above water financially, at least that department, a person has to stay above water reputationally. We can think of it in that way. And so again, let's go to the examples and think about these different systems and what they might be performing inference on. Scott? Just gonna comment on the status issue: it feels like then you can adjust. So you're making serial adjustments, right? You're trying to minimize the free energy, minimize the difference between your expectation and the reality you're experiencing, as I understand it. So with that overall affective status, you can do what feel like compensating controls. 
So you could, if you feel like you're at risk vis-a-vis the one presentation you're making of self, bring in other presentations of self to complement or supplement or mitigate the impression of one of them, because you have myriad affective projections that you're making simultaneously. So it feels like, if you have an awareness of this, it can help you with that presentation-of-self question, by understanding: okay, this affective status that I'm presenting is not getting me where I want to be inside. So let me leverage another one, which is contiguous, or a neighboring affect; I'll call it an affectation here, and what I mean is a presentation at a Markov binding site. So it feels like it gives a strategy for hybridizing and synthesizing your presentation of your Markov blankets. Anyway, keep going. It's just impressionistic. Yes, okay. So let's go to the examples and think about how these three, potentially even overlapping, systems are going to carry this out. An individual at each time point, maybe let's just say this is every second or each minute; a team might be interested in organizing on the multi-day scale; and the organization is going to be more explicitly thinking, okay, quarter to quarter, that's going to be our next time that we're going to do all this simulation. So it's not surprising that as things get bigger, they also get slower. So that's a common theme we're going to see. So the individual's affect is really where this emotional inspiration comes from. So there, there's the closest mapping between affect and emotion. But let's think about the team and the large-company examples. Well, affect again, and valence specifically, is like that plus-to-minus axis. So is this team going well? Is this company going well? 
And if someone says, yeah, we're burning money at an incredible rate, it's unsustainable, but it's going well, because I have a believed trajectory that we're going to go on, where we actually pull out of the nosedive and do XYZ. So they can still think it's going perfectly well, even if the instantaneous rate of loss is high. And then on the flip side, you can imagine a team or a large company where instantaneously their state looks like it's good. It looks like they have a lot of cash, or it looks like they have a lot of people involved in the team, but they still feel negative about it. So that disconnect between the affect about a situation and the state of the situation is one thing to take into account. Now let's connect the affect at all these levels to precision estimates at different levels, especially in the active inference framework, which frames what organisms, or any system, are doing as precision-seeking: high anxiety is going to come from having low precision. Now, it's not that we want zero uncertainty; we want to have this controlled-novelty sort of U-curve. But there can clearly be situations where your precision is too low, and then you're surprised by what is being emitted to you in the niche, whether it's your team's operating environment or the company's operating environment. And then your precision can't get too high, because if you're overly precise and you're incorrect, then you're going to die. Like if you have a super, super precise understanding but you're just delusional, then you're going to fall off the cliff. So in a way, what shapes this precision is actually the real world, which prunes beliefs that are not functionally associated with success in the real world. Stephen, and then we'll come back to it. Yeah, I really like this idea of thinking about how we understand ourselves within all of this. 
So for instance, yeah, if you could move between a different regime of attention or sense of self, which could be like what happens when we do sense-making, maybe even change the operating metaphors of the person: who am I at the moment? What's the context? What's my situational attunement? What's my team's purpose? What's the operating metaphor for this team? Are we all driving together in a car? Or are we building ourselves a fortress? And that almost could give a way to jump in and out of these kinds of affective engines, if that makes sense. So as you're seeing over time, how's it going, what's the policy, and what are the affective states that I sort of thought about in a certain future, you could almost then say, okay, what if I shifted? Maybe that's where you'd need the body, like the metaphor I'm working with. And then what does that do? And maybe at times that's when you need to work with a group to be able to do that, because maybe an individual can't just keep flipping backwards and forwards. You need to work as a group to do that kind of sense-making across different times. Cool, Scott? I had a question for Sasha. You were mentioning pruning, but you weren't talking about dendritic pruning. But I wonder if dendritic pruning is describable by the type of pruning you were just talking about, because my understanding of dendritic pruning, very minimal, is that during the day you have the growth of dendrites, and then they're pruned back selectively. And that is influenced by your externality each day. The growth is influenced by your externality. And then there's this pruning process. That feels like it's iterating itself at that level. Is that describable by this here? Is that another scale? Or is it something else that's going on there? Yeah, I think that's a really interesting analogy, that that's kind of what's happening there. 
I don't know if I would kind of draw all the same parallels just from a mechanistic standpoint, but it's true that kind of in the generalities that as connections are formed during the day, then they're pruned or lost as they're not used. And yeah, we think that happens at night. But yeah, I think that's an interesting analogy to draw that it's beliefs about the world that aren't, I don't know, useful or supported are the ones that then get pruned. And I was also gonna say the other aspect is, I think what we know from our lived experiences that it's really interesting when people can be holding certain beliefs and not know that. And then when they finally kind of confront the belief, there's a bit of a breakdown. And to me, that moment is very interesting when people have to confront certain beliefs that maybe they didn't even recognize that they held. So there's a phase change and a recontextualization piece in the last part you said, Sasha, and the pruning is using a network analogy. So it's like pruning connections between different aspects of the system. And so let's think about the individual in the office having a conversation with somebody. The query, the prompt to them, let's just say is how are you feeling? And that is a query that's presented to the interface, which is their ears and also their vision and other aspects. The query is presented to the interface about what's inside of the interface. And now what's happening inside of the interface is a generative model of that query presenting system. So inside of the person, there's a generative model of, for example, the person who's talking to them asking that question. And we know it's a generative model because if they covered up their eyes or if they only heard part of it, they might be able to still recover what was being asked. 
But in either case, there's a generative model inside of the interface, and a prompt is being presented as an observational stimulus to the interface, querying how the system will respond. So speech is action. And that's not even a legal claim; it's a motor claim. Speech is generated by a motor behavior. It's not your hand, but it is a motor behavior. And for some people it is with their hand. So various body parts can be used to communicate. Let's think about that interface in the case of the team and the company. So potentially for these organizational entities, the interface could be like a consultant who interfaces with specific questions. But instead of a counselor who asks an individual, how are you feeling, there might be questions that could be asked of the team. And there could be questions that could be asked such that only the team could respond. And someone might say, well, how could you ask a question that only a team could respond to? Aren't teams composed of people? There are two things I'd say there. The first is that that's the reductionist argument. The reductionist side is like, what do you mean, get a response from a person? Aren't you just gonna get a response from skin cells and neurons? It's like, right, that's what I'm trying to get, that response. So even if the response can be reductionistically understood as being composed of smaller pieces, that's not a surprise. That should be taken as the default. So questions can be asked of teams that, of course, humans are gonna be responding to as well. But also we can think about, kind of like colony phenotypes, team phenotypes. So for example, let's just say I ask a question to a team, like what did you have for lunch, or what do you think the most exciting project is for next year? And then I look at the distribution of how long it took people to respond to that message. That's like a colony-level trait. It's a collective trait. 
That distribution is composed of people acting, but no individual can dictate what that distributional parameter is. So it's like a measurement of, let's just say, some blood cells. You flow a million blood cells through and then you can get a distribution across all of those blood cells. So it's like you're querying in the aggregate even though each individual sub piece could have been following that distribution or somewhat of an outlier. So those are the kinds of things that you could ask presenting to a team or an organization. And the team or the organization is gonna have a generative model of its niche. So if it's involved in a certain type of consultancy where it paid a lot of money and it really respects what the person is gonna be doing, then maybe when you ask them the question, what's the most exciting project and why they give one response or what's the most challenging project of this year and why was it challenging? Whereas if you were in a different context, you'd give a different response because the generative model of the system across that interface is very different. So the response that it gives will be different and it's gonna be querying different things and externalizing different aspects of itself. Could be accurate, could be inaccurate. Scott. So one of the things that raises is I've been in some of the work that I've done in cybersecurity, I've been parsing principles versus ethics versus norms. And so it's kind of interesting in querying a group, you're gonna get an average, right? You'll get a Gaussian distribution or whatever of opinion and then it'll be informative. And so norms I assert for these purposes that their behaviors, behavioral expectations that are set through the past what was normal. And one of the assertions I make is principles and again, their assertions is different overlap here. 
The principles are institutional assertions of aspiration, and ethics are also aspirational, but they're human assertions about human behaviors. That's the distinction I make, whatever words you wanna use. And so it's interesting in the context where you're talking about querying a group, because there's different evidentiary group chunks that you'll look at, right? Normatively, you'll be looking, according to those definitions, at what we did before and what we do against the set of standards that were set. The principles may be more aspirational, and it may be how we are doing towards that thing, but it's not necessarily based on what was done before. You can say our principles are more aspirational than what we did before. So it's interesting, just that interface of what's used in the discussion of the group dynamics and how that relates to what you were just describing, which is the querying of the group and individuals. The framing of the nature of that output you're getting from the group is kind of interesting based on the question asked, right? What did you do before is one question. What do you wish to do in the future is a different question. Yes, so that query could be, what do you regularly do? So again, it could be the person in the office and someone says, what is your morning routine? That's something we talked about in a previous week. Or it could be a query to a team: what is your onboarding routine, or how do you welcome new members? And then this question of the aspirational and the norms. So it's funny, because norms can mean what's normal. It means what's expected, that's what has happened in the past, because what is normal, if not the past? But at the same time, normative means what should be. So a normative claim would be like, I think people should be treated this way. And that may or may not reflect what has been the norm.
So people of all belief systems make normative claims that sometimes are consistent with norms and sometimes are almost radically divergent from norms. Someone says the norm is A, but I think the norm should be B. So I'm making a normative claim about the norms, that it should be different. And so these are all interesting things to, yeah, Scott. So one big gap that's out there in the world right now is the gap between law and the interactions that technology allows. And I'm talking about practically up and down the chain: reproductive interactions, biomedical, population-wide, all these different things. The law is really lagging. So one of the things that's interesting about this is, is there a possibility that this kind of analysis can help us to discover norms that would guide groups? And those norms, if they're sufficiently interesting to the groups, can be made enforceable into laws. And so, to me, this is a very interesting possibility of discovery of new laws and self-constraints that groups can put on themselves in order to function as larger systems in a global kind of context. But I'll put that aside for now. But I just wanted to raise that: is this a mechanism for discovery of new working models of larger scale norms? Yes, Stephen. Yeah, I think that this is a good point, because discovering that with this model here, where you've got this task-specific piece and then you've got this affective, actually larger-scale, longer-timescale sort of checking. I think it's also relevant in terms of how we can get mechanized, like if we just keep doing the same task and we get the reward and the cue and it starts to create this false sense of certainty in our world, which is so much about just constructing more of our social niche and alienating or externalizing the environmental. So we can start to get caught up in these kind of task-specific ways of working and silos.
And so there may be this could speak to why we need to do some sort of sense breaking to sort of take us out of some of these mechanizations and look at a longer timescale and use affect to do that integration and not just rely on metrics. Cool. Let's take Scott's point about this being not just an enforcement understanding, but a discovery process and connect that to some topics we've been considering like bottom up versus top down function of systems. So the bottom up is learning. That's where the little ants are going out and they're finding different things or each of the cells in the retina is receiving a different photon or each person is receiving a different newsfeed. And then as it goes quote up or in the system, it's being aggregated and that is updating progressively more and more summary like variables like of teams or of organizations. And then norms can feed back down to lower levels of the system, whether in the brain like the Bayesian brain or the predictive processing hypothesis or whether it's team communication norms and those can be informal or formal. When we are talking about human organizations, we gain this level of syntactic framing of top down priors. So instead of just saying this is how things are like the red blood cells are flowing at this speed. So that's the speed they're flowing at. We're gonna say there's a speed limit. And if you don't follow the speed limit, then we're gonna have this type of enforcement mechanism. And so it could be the case that in the legal context that if you just apply a law, a top down law, it's kind of like trying to teach someone a motor pattern. Let's just say a speech law of some type of controlling speech. It's like controlling someone's motor behavior again because it is a motor behavior. And so what would you do? Would you put them in the exoskeleton and say, okay, here's a perfect golf swing. Now you're a golfer, right? Well, no, you actually need to practice and you need to have feedback. 
And so if the law is overly punitive and it says, oh, that one time that you got put in the exoskeleton, you didn't understand how to golf, how could you not have made the connection there, from that one time that I punished you for messing up? That is not an extremely helpful framework for certain issues. So how could we have this bi-directional conversation where there's experimentation and implementation at the edges, and then people's experience percolates upwards in the system, and then the system informally, but eventually formally, ends up entrenching healthy or productive or agreed-upon patterns? So it's an interesting thought about how these types of systems are not just about learning, they're not just about exploitation as we've been thinking about, they're sort of in both domains and they can trade off. And when they need to exploit, they can dip into exploitative-type behavior, but also they can be in an exploratory mode. Scott? And it's also so much wrapped up in identity. It's amazing how this is a new platform for identity, because everything has a Markov blanket and it has attributes, and external attributes that are presented. And so the discovery, when you have things like trade associations, markets, communities of interest, where entities with similar, I guess, Markov blankets and attributes can cluster together, forming those systems which have larger scale, which enable de-risking and leverage for the organism or organization at larger levels. So the discovery process brings these markets where Markov blankets are made similar, right? Because you go in and there's a law, it says if you're going to a flea market, your table can only be five by five. So if you go in with a seven-by-five table, you can't go to that flea market. You have to follow the rules to be part of the club. But if you're in the club, you get the power of having more people walking past your table to sell stuff.
So there's the interface, the law for the vendor to the flea market. And then within that transaction, there's a sub-transaction, which is: if you want this little pocket knife, you're gonna give me $3, and then I'll give you the pocket knife. So these are the nestings of agreements in the context of identity and enforced and unenforced norms. Blue and then Stephen. So just to kind of tie into what both you and Scott were saying. So like, you know, if we're ants and we're going through the world, building this generative model as we discover, as we're learning, right? So as we learn, we build the generative model. To what extent is our generative model tied into our identity? It's curious to think about. And I mean, I know that there's a policy update, and perhaps the policy is more based on identity. Like we talked about earlier, like the running every day, maybe it's a policy thing that's identity. And so we pick the policy that then minimizes the free energy. But I just wonder, you know, the generative model, how frequently that builds upon the identity, like what the relationship is between those two. Okay. And then Stephen. Yeah. Well, one thing that sort of ties into that question of identity and this type of model, and the Markov blanket, I suppose, is how much changes based on the processing, which could be sort of structured through the way the brain is structured. So there could be an additional type of structuring going on in which all of this is able to work together. And then, so how much is this kind of the observations and the states in the sort of brain computing in relation with the body, and how much is it just literally the way the Markov blanket itself is changed, you know?
And I think that's maybe an open question at the moment because the question, I don't think as much is the idea of regimes of attention has been talked about, but literally how would that change your computation by bringing in different parts of the body and the relation to the environment as the sort of Markov blanket to integrate information coming in and how much is through computation. And I sense that at certain times, maybe when you're learning something for the first time, it's very computationally heavy, but then as you draw things down and it becomes kind of tacit knowledge, we might become subconsciously able to flow and shift. Yes, formal systems can become integrated through muscle memory. So somebody who's learning how to play violin, they don't need to know the math of the frequency overlap. They might learn that in their journey, but also it becomes incorporated through culturally acquired practices and muscle memory. And that can be carried out into other areas. Scott? It was an article just read recently in Nature or Science which relates to Stephen's last point. And it was about how in cultures where there were deaf culture, so there are isolated populations, I think there was one in Central America, there was one in the Middle East of deaf people because it was some kind of genetic thing. And so large populations of isolated people who were deaf. And what they found is when they came up with their independent sign languages, they were all independently generated. What they found in this research is they used the body in each of the isolated populations. The body was used in the same way, different parts like first thing, I'm gonna make up the parts because I can't remember but they would first use the face parts, then they would use the left arm, then they would use the right arm, then they would use the core of the body. 
The morphology of speech through sign language was identical across the human populations even though they were isolated from each other in terms of what they added on. I forget what it was with the bigger concepts or the things that were emotional, there were different morphologies of that situating their expression in their morphology, in other parts of their morphology. So I thought it would relate to that. I'm not sure how, but it feels like that form of expression being uniform across humans, it's like the theories of humans having a language capacity. I forget who that is, maybe McLuhan or somebody. Anyway, but it's interesting that the morphology of the body, that sounds like there's some intrinsic tiering of what expression comes through what parts of the body. I'll try to find the article that. Very interesting and that actually reminds me of William Blake's poetry and the four-fold Albion which is a metaphor for cosmic persona as well as all of England as well as the individual body. And so it's a metaphor that he returns to many times in an allegorical context and it's so evocative because it speaks to a multi-scale system and it is of course applying to multi-scale systems. Steven? Yeah, thanks for that, Scott, because I think that's, I mean, that could actually also, there's all this thing about this innate language capacity which has now been kind of discounted. But I mean, if actions first and the morphology is maybe more related, certainly within certain, environmental niches that people have grown up in, that would maybe explain why there was that confusion in the language world. Because people sort of saw a language in the universe, well, actually if the morphologies where it's at and the language is just another type of morphology that's kind of building on the core morphology of the body, that's where that patterning was coming from. Not the language itself or the language in process. 
I was just gonna say, extending that out further, one of the assertions that I've started to make and believe is that the mind doesn't reside in the brain at all. The mind resides in language and in material culture, and the brain is an antenna that we tune into the local mind wherever we're exposed to it. That's why colonialists put indigenous populations' children in their schools, right? They want to tune their mind into a colonialist culture early, because that then renders the externality innocuous for their colonialism. Anyway. Yes, that's true of all children's education, including contemporary. And to bring in this key point of precision again, active inference is not about systems finding the most rewarding state and then sticking with it. It's about precision. So to connect that to identity, the child in education is basically being given symbols that will help it navigate the regularities in its niche. And again, there can be dysfunctional education that leads to children that are incapable of taking advantage of, or working with, the regularities in the niche. But this is where the policy selection piece comes into play, to kind of return it to the figure, because it's important to return to what the paper said: the policy selection piece is so tied up with identity. It's not that there's an identity, like who we are, and then there's this totally separate question of what we do. It actually is: we are how we act. And to take it to another level, the understanding of how the world changes, S, the states that are external that we care about, the understanding of how those states change is related to the policy actions that the system takes. And so that brings this relational mode another level deeper. It's not just like, say, Sapir-Whorf: oh, well, the words that you use change or influence how you think.
It's everything is influential on how we think because how we think is like this emergent outcome from all those factors. So it's a starting point to say that the words that we have or the affordances or the sensory capacities that we have shape our understanding of the niche. That's the starting point. Blu? So just to piggyback on the idea of a team and identity like how and also where the mind is like where is the mind of a team, right? Like when you're thinking about this in terms of active inference. And so the individual team members, I mean, we've all worked on teams before at various points and it's amazing how the team changes based on the like the function of its individual components, right? Like so there are different people that bring different dynamics to a team. Some people may be louder, stronger or less influential or everyone has their different kind of role in a team. And these all like really bring changes to the overall structure and the way that the team will react, right? Like based on a query, like when you query a team, you know, what is the best project of the year that'll change based on the individual components of that team. Yep, and the team organism mapping, let's think about team onboarding and the way that the interface of the organism is like an interface of a team. So the immune system of a person, there's times where something is onboarded but in a very controlled way and it does not trigger the immune system. So for example, if you eat something in the stomach which is a specialized chamber that digests food, it is then absorbed through the interface of the gut in a controllable way, in a way where only the safe pieces, like you know that the sugars are coming in and the amino acids are coming in, lipids are coming in. So through that controlled and evolutionarily selected upon interface, there is an onboarding process that's healthy. 
But if there's a onboarding event on your body like a cut in your skin, it's going to trigger the immune system. So that's more like a security breach for a team, whereas eating for a team might be like, hey, we're going to get this ingest of new data. Our colleague is going to send us a data set. It's going to come from this email address and it's this type. If it's from a different email address, I'm not going to see it. Or if it's of a different type, we're going to need to change our pipeline. But if it's this type of food coming at this interface since you know this structure, we're going to be able to digest that and then use the information latent in that input to help our team become more structured or to expand just like an organism would break down the food and use it to grow itself or to operate. Steven? And also with teams, you've got the teleology of the team, like the purpose, the goals. And I think that that, I mean, this figure is sort of speaking a lot to task-based ways of things being evaluated with rewards in a way that we structure our teams often with the way that we then construct the environment to sort of lead us towards those rewards in a way. And I'm really interested, I'm looking at this thing called infusion space that I've been working on is when we are in a kind of an organized space with a teleology and a direction like a team and organization. And then when we're in a sort of community space which maybe this isn't, I think is another process of affect where we're sort of attuning to that kind of intersubjective dynamic between us. And there's ones more efficient, the teams more efficient, if your goal is the right goal. The danger is that you could be, they talk about now with the modern world is we're getting very good at doing the wrong things. So it can be the trap you fall into. So climate change being a classic case, we're very good doing things that we shouldn't be doing. 
So I think there's an interesting question there about a teleological dynamic aligned to goals and rewards, and it could actually remove the knowing of your affect. And you can just ignore it because you're getting other kicks, when the deeper knowing is like, you know it's not really gonna be good when you talk about really long time scales. Okay, let's expand on that. So teleology is coming from the Greek word telos, which means the end. So teleology is the study in philosophy of what systems are as defined by their ends. So the purpose of something: houses are to keep people warm, and what are people for? But instead of engaging in the philosophy of teleology, like what does it mean to be a human? What are we here for? We can think about it from a cybernetics perspective, which is also equivalent to, like, an intentional stance in philosophy. And that's kind of like saying, okay, look, whether or not we're ever gonna figure out the ultimate telos of a human or the ultimate telos of our team, at the very least, as a goal-seeking system that has to operate amidst uncertainty and communicate through limited and noisy channels, we need to have clarity on our mission, our local telos, so that we can have clarity on action. And again, if your local mission is something that is pursuable, then it's a second-layer question whether it's, quote, good or bad. And that's the whole question about what Scott raised with the Markov blankets of legal entities. And there it's kind of easy to see some of the analogies between, let's say, a society and a person. If each cell were to fight for its Markov blanket, then things could get hairy pretty fast. Whereas over evolutionary time, the interfaces between cells of the organism have evolved, through so many different proteins and so many different mechanisms, to work well together, to be compatible within the organism.
And similarly, within a healthy operating system for legal entities, you could imagine there would be healthy interfaces. But in the absence of a healthy interface, then even entities whose interests are completely aligned might end up in extremely adversarial contexts together. Scott? And I think that might be the problem with Kant's, Immanuel Kant's, what is it called? The something imperative? Categorical imperative? Yes. And is that the notion that if a human, a good human, acts in a good way, it'll be good for society? So the Kantian imperative, as it's sometimes called the categorical imperative. I just wanted to look up the exact wording. It is quoted as a rule of conduct that is unconditional or absolute for all agents, the validity or claim of which does not depend on any desire or end. That, yeah, that reads heavily as that classic era of abstraction in philosophy. And that's one of the problems in European law right now, and identity law like GDPR and those kinds of things: in the real world, their idea of entities and of identity is based on Hegel and Kant. And there is a real focus on that individual being good, and that being enough to have the entire society work well. And I think that's part of the challenges over there. In continental law, you make the law and you refer back to the statute all the time. In common law, which is any place that was part of the British Empire, what you do is you come up with a half-assed version of the law and then you allow common law to fix it up. So the common law acts as a continually dynamic revisiting, because the courts then make the law, and then when you have a later case, you don't just look at the original statute, you look at the cases as well, the other court cases. And so it's been modified by that. So one of the things that this gets tangled up in is, philosophically, a question of how much of a teleology you have.
I believe, my personal belief is, that because they had royalty, a tradition of royalty, on the European continent, they're more amenable to a single voice. And so that single voice can act in a misdirected way for a longer period of time, because it projects a paradigm onto the population. But it's just something that gets softer at that level, perhaps, because we don't have the data and the metrics, but it feels like those same kinds of things are iterating. From a system perspective, if you have something that's too stuck in the mud, it's not gonna change as fast. And now that's happening everywhere, since technology, Moore's law, increases interactions exponentially. All of our institutions, all of them, were intended to de-risk and leverage much more leisurely institutional contexts. So they're no longer fit for function in the interaction landscapes we have. And so one of the questions is, in what way might this analysis help us to not just improve old institutions, that might not be possible, but construct institutions that can better be dynamic with the increased frequency and volume of interactions we have, which is perceived by any one institution as an increase in interaction density. That's why your calendar is all jammed up: because we're still using calendars the way we used to use them, which is a linear progression of things they have to do, things like that. So anyway, sorry about the question there. Yeah, interesting. Stephen. Well, now that also might tie into this idea of, when we have systems in our niche that we've created, when do we need them to be more tunable to our own active inference types of dynamics? Like, if this is the type of way that we can, you know, what would help us be better at attuning, and what types of attention at what types of scale would we want this kind of dynamical process, and when does it wanna be fixed?
And it kind of speaks to another problem in the development world, which was the rights-based approach to development. So someone somewhere defines what people's rights are, and then it gets dictated down, but that causes a lot of problems when you roll it out into cultures which value things slightly differently to those that decided what's right. And also, what's right for someone in a poor context or a poverty context might not be quite what it is for someone in a developed, wealthy context. So maybe this is the question that needs to be asked: when do we need to have this kind of active inference, ergodic approach thought about, and when do we say, look, this isn't being done that way, we're just gonna bolt it down and use a non-ergodic kind of structured set of laws? Okay, interesting. So let's try to connect that to Kant and then back to action, because that's the key piece. And that's what we really wanna emphasize with the active inference approach: although we generalize, we're generalizing to reduce our uncertainty about observations, which could be letters or sounds that we're hearing. We're reducing our uncertainty about the stimuli that we as people are receiving, by generalizing, so that we can guide action. And so the categorical imperative, on Wikipedia, it says it's best known from its first formulation: act only according to that maxim whereby you can, at the same time, will that it should become a universal law. So that's actually also related, in a sense, like a symmetry relationship, with the Rawlsian theories of justice. Like, what would be fair would be: if you were to be a random draw, you'd wanna be in that society versus another society. But here's where active inference and action and the reality of the finite world come into play. Different people have different risk preference trade-offs. So maybe somebody says, yeah, I'll take a one in a million chance of being the dictator.
And somebody else says, no, I'd rather have a hundred percent chance of being in this other world where everyone has the exact same amount of official power, for example. But even at a deeper level, that's a thought discussion, and it's becoming increasingly lifted from action and from the policy selections that we, as the agents perceiving that question, are being faced with. And so when people bring in things like "it should become a universal law," that's a normative claim about a universalism, about a law. And who's the judge, jury, executioner? So although the sentence, the imperative, started with "act according to dot, dot, dot," it ended up in the clouds with some speculation about what could or would or should become universal, and ostensibly enforced by other people with weapons, but not yourself. So how do we tie it back to the real system and confront ourselves with what we actually are being faced with, from an affordance and from a challenges perspective, and then ask what policies will help guide the whole system towards states that we want to see? But at first we can just acquire precision about: who are we? What are our relationships? What are our interfaces? How can we think in a more rigorous multi-scale way about how we are composed of our relationships? So it's not that I'm some atom and I'm being influenced by my relationships; it's multi-scale relationships all the way up and down. So what is really happening here? These are some interesting ways to sidestep, when working together, not morality itself, but the ways that morality has been framed, in terms of, well, I think it should become a universal law that I should get all the money. And then it's easy for me to act with a categorical imperative under that understanding. And someone says, well, no, that's not the real understanding. And then you're right back to square zero.
So how are we going to use systems thinking, but also include aspects of intercultural communication, and understand that people are gonna be having different experiences of the same deal, the same interface, different people are gonna have different experiences of it, and then work within that environment? So that's kind of some big areas that hopefully we'll be thinking about in the coming years, because there are serious questions about how to interface formal systems with people. Scott? One of the things that's interesting right now in law is that it's really showing wear and tear. And people are really focusing on liability and rights and things like that, which is terrific. But one of the things is, a right is an expectation. And one thing the law does is it constructs a bunch of duties around that right that give life to the right. So if I say I have a right to free speech and nobody has a duty to respect it, then I don't have anything except words on paper. And so the duties, the construction of duties, what that does is it's a constraint on other expression, other Markov blankets, right? You're limiting and constraining the environment of other individuals you'll meet, because they have a duty to do X. There's a judge who said years ago in the United States, your right to swing your arm in public ends where my nose begins, right? So you can keep doing this until my nose is next to where you're swinging your arm. So it's interesting, isn't it? That is really that dynamic element: if a nose is introduced into that system, you've got to stop swinging your arm, right? And so the anticipation of situations where you shouldn't swing your arm: don't swing your arm in an elevator, you know, that kind of thing, right? Don't spit on the subway, right? You start to have these rules which are like, hey, we can generalize that duty to a population. And if it's relatively innocuous, like red means stop, purple could mean stop.
It's not the light that stops the car, it's the behavior elicited by the signal. So there are a lot of innocuous things; we can avoid avoidable harms by agreeing on innocuous signaling for certain duties and behaviors that would render those rights realized and living. Yep, so with this kind of eating metaphor for teams: there's ingestion of stimuli, or of food at the organism level, and that's being processed either chemically with food or informationally with stimuli in order to guide action. So we can think about these teams and organizations as getting inputs and then engaging in action in their niche, and selection is going to shape this so that the inputs can be processed in a way that leads to outputs that are effective. And so again, another mapping would be to a heat engine. So with an engine, if you can convert 100% of the potential energy into work, it's a very efficient system and it doesn't lose a lot to heat. Whereas if you have a system that's broken or has inefficiencies in it, it will perform very little work in its task-specific role and it will release a lot as heat, which is like wasted energy. That's like data exhaust. There's wasted data. There are so many mappings here. Actually, in the last couple of minutes, let's just run through the slides, see if anything comes to mind. But other than that, I'm sure we could keep on thinking about this. In Figure 2 of the paper, the authors showed this rollout game, where the agent at each time point is imagining what could happen on future timelines. And so here's a timeline where I stay in this neutral state on the left side. Here's a timeline where I get to state three and that's good, and then I go to state four and it's bad. And so as you simulate and you roll out more and more and more deeply, that's the sophisticated or the deep component.
As you roll that out, in order to track what kind of a world you're in (are you in a world where all the routes are bad, or all of them are good, or some are good and some bad?), the agent is tracking a few things. It's tracking the state estimates, like the expected value of different policy choices, but also the variance or the uncertainty. And what we saw in this paper was that as the agent starts initially thinking and simulating deeply, it starts by perceiving itself in states that are rewarding, and it has relatively high precision about being able to maintain itself in those states. But as it continues to simulate deeper and deeper, the precision of its model drops. This is associated with anxiety from the computational psychiatry perspective. And it's also associated with an increased percentage of the events being negative, but not a 10x increase in the events being negative. And as we pointed out last time, it's actually when the threat perception is at a medium level that the anxiety skyrockets. And again, this specific trace is just related to the parameters of this model. So it's not meant to be a general claim about anxiety or something like that. It is for future work to really map this onto real-world systems. We had the formalisms, which people can pause and look at: the formalisms of sophisticated inference and the natural language translations of some of these variables that are interacting in these papers. We talked about the generalized free energy and about how that formalism has a few different parts. One of those sides is conditioned on policy and relates to states. The other side is conditioned on observations and states, and it relates to policy selection.
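As a loose illustration of the rollout idea above, here is a minimal Monte Carlo sketch (the reward structure and all parameters are invented for illustration, not taken from the paper's model): as imagined futures get deeper, the variance of the estimated return grows, which is the sense in which precision can drop with depth.

```python
import random

random.seed(0)  # fixed seed so the toy numbers are reproducible

def rollout_stats(depth, n_samples=500):
    """Sample n_samples imagined futures of a given depth and return the
    mean return and its variance. Variance plays the role of inverse
    precision here: deeper rollouts accumulate more uncertainty."""
    returns = []
    for _ in range(n_samples):
        total = 0.0
        for _ in range(depth):
            # Toy world: each imagined step lands in a rewarding state
            # (like state three) with probability 0.6, else an aversive
            # state (like state four).
            total += 1.0 if random.random() < 0.6 else -1.0
        returns.append(total)
    mean = sum(returns) / n_samples
    var = sum((r - mean) ** 2 for r in returns) / n_samples
    return mean, var

shallow = rollout_stats(depth=1)
deep = rollout_stats(depth=8)
```

With these toy numbers, the deep rollout's variance is several times the shallow one's, mirroring the qualitative point that precision falls as simulation deepens.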
We talked about a few other topics: intrinsic and external value, general and special cases, and how, when you are moving through a prediction in this branching way, there's an exponentially increasing number of states. And so in order to deal with that, organisms or systems have to engage in a little bit more of a guided kind of search. And that brings us to this slide, which is what I kind of wanted to pause on, because I think, Blue, this is what made you say we have to invite Scott. So what was interesting to you here? What made you think of that? Just Scott, and how Scott's always presenting on risk. And this just was like, well, I think that Scott really needs to be here. And based on also the fact that Scott said he'd been studying active inference since the October Complexity Weekend. So those two things kind of nailed it for me. I feel like it's this giant thing, a smorgasbord that's in the next room that I haven't yet entered into. It's so delightful. I'm so thrilled by my ignorance in this. This is the best gift I could ever get. So thank you all. You know, it's like a room-to-room party. You just go room to room and it's always a new setting, but we're bringing the party along because it's enacted. It's not that there's a room of researchers or a community that's waiting for us to be onboarded onto. We're co-domesticating each other and creating the community that we wanna see. So that's self-organization and multi-scale organization in action. And we can actually tie that back to this figure: the active inference framework under the special cases which were described in this slide. We can think about a few different cases where certain things are driven to zero. So when there's no uncertainty about states, in other words, when states of the world can be perfectly known, then risk-sensitive policies can be enacted. So when there's no ambiguity about states, you can talk about risk. That's this part right here.
However, when there is ambiguity about states, you have to reduce that uncertainty. And that's sort of like, if you don't know what the states of the system are, you can't operate. But to the extent you have reduced ambiguity about the observations and states, you're able to increasingly plan risk-sensitive policies. So Scott, that makes me think: in a world of digital transactions where we can have reduced uncertainty about certain types of events happening, could we then move some of that energy and focus to risk-aware policy? So that's one piece here. That's why I call what we have right now a self-boiling ocean. Nice. So that's the saddle meme. This is still within the context of how most people think about ambiguity and uncertainty, VUCA. That's in terms of minimization of two different things: minimize uncertainty, minimize risk. First, go ahead and reduce your uncertainty about the states. And then, to whatever extent you can do that, reduce your uncertainty about the future, vis-a-vis reducing risk. That's the normal framework for most people, or at least it's one way to think about it. But now let's think about it with active inference, not just in a risk-controlled way, but in this Bayesian statistical framework, a variational framework, where we're actually doing not just a double minimization, but something like a push-pull with minimization and exploration simultaneously. So we're gonna be able to dynamically adjust explore-exploit. It's not exactly explore-exploit, as we talked about, but it's like explore-exploit, because we can dynamically adjust the relative pressure or weighting of the in and the out. So we're always exploring and we're also always reducing our uncertainty about the system.
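The two special cases being gestured at here can be written out with the standard decomposition of expected free energy from the active inference literature (this is the textbook form in the usual notation, not a quote from the slide):

```latex
G(\pi) \;=\; \underbrace{D_{\mathrm{KL}}\big[\,Q(o \mid \pi)\,\big\|\,P(o)\,\big]}_{\text{risk}} \;+\; \underbrace{\mathbb{E}_{Q(s \mid \pi)}\big[\,\mathrm{H}\big[P(o \mid s)\big]\,\big]}_{\text{ambiguity}}
```

When the likelihood mapping from states to observations is precise, the ambiguity term vanishes and minimizing $G(\pi)$ reduces to risk-sensitive (KL) control, the case where "you can talk about risk"; when ambiguity dominates, policy selection is driven toward observations that resolve uncertainty about states.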
And so that's why the two arrows are moved into an overlapping area. Just graphically, this is just a loose graphical model here, because it's not a disjoint two-step process of ambiguity about states, and then ambiguity about risk if we have certainty about states. This is the conjoint estimation of all these pieces together in a way that's a push-pull. So there are a couple of paradigm-changing ways of framing risk and operating in this mode. And that will lead to different types of institutional structures that'll be distributed, which will resemble neighborhood watch, but it won't just be neighborhood watch, it'll be neighborhood creation as well. That's a fundamental change. Exactly, instead of just putting up the law like a tripwire and then triggering it, you could say, how do we just have people who are going on a neighborhood walk, just people walking in the neighborhood? Like, you know, they could just be having fun and being social; could that be the actual thing? And so it fuses several of the previous ways of thinking about it. Well, we're gonna reduce our uncertainty about what's happening in the neighborhood, and then we're gonna do a risk minimization policy. Well, this makes you think about installing surveillance cameras. But what if you said, we're gonna jointly be discovering what community norms we have and what we want to be aspirationally as a neighborhood around safety or around communication. And maybe there's a way where we can just go on walks in the afternoon and all of a sudden people are connecting and it's a safer world. And the other, just one other thing, I once read an article about the world champion fly tier, and he tied flies, and they were the most successful flies for catching fish. But all the people complained that his flies didn't look like actual insects. They didn't fool humans. And he said, I don't have to fool the humans. I just have to fool the fish.
And so the reason I say that here is, the way we're gonna do this is we're gonna use the left side. We're gonna get people in by saying, hey, you got risk? What have you been doing about the risk? Let's talk about the risk. Okay, that's fine. We're gonna get other people in the room who are gonna be talking about the same risk, but the people never met each other before. And then by the time they leave the room, they're gonna be in the right-hand side of the design. That's how the governance is gonna shift. We're gonna hook them with something they recognize, like the fish guy. We're gonna hook them with the thing on the left, which is, hey, risk, we're gonna talk about risk. That's the Atlas of Risk. I'll send it around to the other folks, but some of you have it. It's 870 different information risks that are shared by people now, because digital stuff has made everybody have trouble. So they come for the risks they know, but by the time they leave, they're gonna be organized into different groups that aren't gonna be dealing with the risk. The existing risk description was just a way to get them in the room. Yes, yes. So one metaphor before Steven: instead of, let's measure your blood, measure your blood, measure your blood so that we can measure your risk of the disease, this is like, how could we be doing active inference on health and working towards health in a way that, yeah, maybe we'll do a measurement here or there, and we do wanna reduce risk and we will reduce risk, but we're actually framing it as a journey with health in mind, rather than ambiguity, risk, disease, negative, negative, negative, uncertainty, anxiety. How about Yin and Yang, with a little bit of a more balanced understanding about how we can function and enact policy? Steven? This is a very important diagram actually. Like you're saying, this takes us away from just trying to reduce uncertainty about the external stimuli and the kind of locked-in risk aversion.
You're actually working more with that time-dependent or time-series approach that active inference gives, of like, well, how are things changing? And just like that fish looking at the fly, how does that embodied agent, and the wisdom, recognizing that there's a wisdom about how to be in the world and how to be with your niche, how is that actually being used to inform how we make things happen, and not just this more mechanical approach? Yes, and good points there. In Active Inference Livestream 19, yonder in mid-April, we're gonna have Deeply Felt Affect, which is one of the papers that was cited in this paper, but we're gonna go into it in more detail. We'll talk about perception, about anticipation, action, metacognition, implicit metacognition. And just to close out, I listed a few questions. Do people wanna hang out for a few more minutes and talk for a little bit? Yeah, it sounds good, but I'll just give the closing slide and then I'll return to the question slide. Thanks everyone for participating. The live viewers have a link to a feedback form in the event. Everyone else, really, thanks so much for 2020, a super special and unique experience for all of us. So really deeply appreciated, for those watching live or in replay or whenever, wherever, it was great times. So thanks a lot and we're just getting started. So for a little bit longer, whoever wants to stay; and if you wanna leave, no worries, just drop off. But what's something that somebody wants to go into here, or what's something that people still wanna bring up? One of the things I'm wondering about is just that we've been taking this from a single individual perspective to some extent. And in those group perspectives, it's interesting, the simultaneity of all of it, that, you know, you simultaneously have an individual... how do we get a handle, how do we help people get a handle on the many levels that operate simultaneously?
And then how do we bring that to the current time, the now, where everyone is acting in this moment? You know, I was reading another paper on fractal time. And they said, hey, the past and the future don't exist. If they do, where are they? Show me, right? So the idea is that now is all that exists. So we have these priors, which were that, and we have these future projections, which are that, but we have the now, and we all act in the now, and all these myriad dynamics feed into the now. And so it's such an interesting notion to me of how to trim-tab systems. As a lawyer, we don't just observe, we're always trying to push things in one direction or another for clients, right? Or for whatever. And so it feels like it's such an interesting thing that we have. It's not just a now; we have a series of nows available to us, I guess, because in the diagram, you see a series of nows, right? In the steps. So that temporal aspect is something that's just not brought into so many systems, right? They're so static in so many ways. And so in a sense, like when you work with standards people, they'll say, oh, it has to be robust so that we can plan based on it. But it also reflects it. And it feels like this analysis naturally lends itself, once it becomes an institutionalized thing, because our institutions now are not built on these analyses, but once they're built on the dynamics, where they're built on, I'll use the word uncertainty, but it's really embracing uncertainty. Or as I said in 2014, I called it entropy engines. We were actually mining disorder in order to create order. So anyway, just a couple of thoughts that came to mind. It feels like that time issue is relevant to understanding how we can mine disorder. Cool, Stephen? Yeah, the time issue's definitely a big part. And we see that with these papers, we've seen that there are the narrative approaches, there's the reflective.
And now they're trying to get into the pre-reflective more, and traditionally that's been meditation. But what is the way that we understand these different timescales? And what I'd be interested in is, how does this link in with affordances? I think there's something really interesting about that niche and the affordances in a niche, because it gives a way in, because it's sort of permanent. The path could still be there, the cups can still be there, yet my interaction with it can vary. So I think that's a really interesting way that experiments can tie a lot of this information or knowledge together. Sure, so one idea we've talked a lot about is the idea of attracting sets or attracting states. So from the perspective of your body with regards to temperature, you want to be attracting back to core body temperature. Excursions outside of that range are not just bad, they're lethal. So attracting states are really important. And just like you can have attracting sets of states on a single number line like temperature, you can have, in the wandering itinerant dynamics of thought, attracting thoughts. And you can ruminate and get too attracted, or it can be too precise, but it's an idea that people could be familiar with or not, that attracting states can be thoughts. So let's think about active inference, the framework, as an attracting state, as something where we can, in a room, bring up these ideas and refocus the discussion, or potentially expand or zoom in. So we've brought up this multi-scale and the deep past and the deep future. So it's multi-scale through space, like the nested systems, and then there's multi-scale through time. There's molecules vibrating and then there's longer-term things. So let's start with the multi-scale space and time just being our starting position. And then active inference goes further.
It says that not only is it gonna be multi-scale in space and time, but there's gonna be non-linearity and uncertainty, because there are causes in the world. So RJ and his alarm clock and his dog: something that you wouldn't expect changed in the world, and that changed another part of the world that you basically wouldn't have expected. And also, our action is always gonna be constrained by our situational affordances in the niche. So to Steven's point there: it's all good to think about objectively what's happening, but being constrained by the affordances in the niche that we're actually in is the operating condition we have. So it makes sense to at least ground the discussion in the affordances. And then we're presented with an attracting set of questions, like: how will we work together to be agents in a system of the type we would like to see? That's a question where, in various settings, people could say, oh well, we don't have clarity on what kind of system we want to be in. We need to reduce our uncertainty about what kind of system we're in. That's visioning and clarifying mission. But then there are other times where people say, yeah, how do we be the agents of this kind of a system that we want to be in? So we're not uncertain about the mission, about the end cybernetically, but I'm just honestly confused about what next role I should take in this system to make this happen according to this mission that we've laid out. And if my actions are consistent with the mission as we've laid it out, then they're good. But if they're not, then they're just not helpful. So we can start to combine a lot of these ideas and realize, hopefully, that whether we go deep into the physics side or into another aspect of active inference, we'll use it as an attracting set, as this range in phase space where we can bring the discussion and help ground it in the multi-scale insights, in the non-linearity of the world, and a few other things. But I think it's really, yeah.
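The attracting-set idea invoked a couple of turns back, temperature being pulled back to core body temperature, can be sketched as a one-line dynamical system; the set-point and rate below are illustrative numbers, not physiological constants.

```python
def relax_to_setpoint(x0, setpoint=37.0, rate=0.3, steps=30):
    """Toy homeostatic dynamics: each step, the state is pulled back
    toward an attracting set-point, like core body temperature."""
    x = x0
    trajectory = [x]
    for _ in range(steps):
        x += rate * (setpoint - x)  # simple linear restoring pull
        trajectory.append(x)
    return trajectory

# Excursions in either direction relax back to the same attracting state.
cold = relax_to_setpoint(30.0)
hot = relax_to_setpoint(42.0)
```

The same picture can be applied loosely to "attracting thoughts": a state the discussion keeps being pulled back to, which is the sense in which active inference is proposed here as an attracting set for the conversation.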
And they really are, they really are that. Like, we start our diagram with entities that have priors, right? I mean, you could start with no priors and you're just wandering around in the dark, in the woods. But we're starting with a system that has something that it's comparing reality to, right? So they're trying to minimize the difference between their expectation and reality. That's the free energy notion, right? So sculpting expectations at the front end is what education is, and experience, right? And our educational system in the United States is set up based on the Prussian school system, right? That's the Deschooling Society book by Illich. So where our expectation was industrial workers; well, now we don't have industrial workers, we only have service workers, and so our educational system is not serving our folks, for instance, right? So we have those influences. I guess the thing I keep coming back to, it's like a situated cognition item: we situate our cognition in externalities. Well, those externalities also have entities that have Markov blankets. You know, right now we're thinking together, right? This is like Plato; you said, you know, dialogue lets us query consciousness. So we're doing that. And each of us has our own view on the direction of this thing, right? So we're not interacting with static things. We're interacting with dynamic things that have their own dynamics, and the recruitment of those populations is, I think, a lot of what happens in legal concepts. I think the things that we're doing right now could make a big impact on ethics discussions, because anything that's not written down in law but is bad is now called an ethical problem, right?
It's just, ethics is carrying a lot of water right now for the anxieties of people generally with the technologies and the exponential increase in everything. So it feels like we have an opportunity for intervention, because there is a vacuum of meaning in the current institutions. And it feels like this kind of explanation might be a nice way to get a toehold there. Let me build on that, Scott. So there's a book I read called Legal Systems Very Different from Ours, by a fellow with the last name of Friedman, but it is not myself or a relative directly. And it really does a great job of capturing how different legal systems, from informal ones, like criminal or prison legal systems, justice systems, using the term quite broadly, to different world traditions, followed very, very different processes, but also had different ends in mind. Now in the current operating environment, not only are we being confronted with different interfaces mixing that hadn't combined before, but there are also new challenges. And so it's sort of like there was a two-stroke engine and a two-stroke engine and a two-stroke engine, and they all were kind of coupled in a way where they all were working internally. And now they're being cobbled together and there's a lot of heat, informationally speaking. There are a lot of parts that can burn because there's so much heat being wasted at the interfaces. And so can three two-stroke engines be simply rewired into a six-stroke engine? Well, no, it doesn't really work that way. How can you link those engines together into a system? And that's really the question we're addressing here, narratively, linguistically, memetically, but formally as well. How will these different systems get cobbled together so that we're actually connecting the gasoline to the car moving forward in one direction and in one piece, not taking gasoline and making something that's just combustible and only hurting people?
Yep, this is where I suggest that people read Flatland, because it's the adding of dimensions, right? Mr. Sphere comes in and lifts Mr. Square up, and he can see through; you can untie a knot in the upper dimension without crossing the rope, right? So in law, what we do is create a new dimension. So there was a time when we didn't have credit cards, and people used to run IOUs at the bar and things like that. It just didn't exist. And so credit cards now serve this huge function. They de-risk huge swaths of interactions, right? But at one point they just didn't exist. And this guy, Dee Hock, wrote a book called Birth of the Chaordic Age, and it's kind of a crappy book, but it's kind of an interesting notion. The banks were like, no, no, no, no, no, we don't want your credit cards, go away, get out of here with your paradigm shift. And then eventually the credit cards got in, and now try taking the credit card fees away from the banks, right? So what was impossible becomes inevitable. And it feels like, because things are so stuck, this kind of approach that we're talking about, when taken into markets and institutions, is gonna be that extra dimension. Like credit cards or swaps, it frees things up; it allows for these interactions to take place, just like the extra dimension in Flatland allowed Mr. Square to get new perspectives on his life and his existence. So it's fundamentally different, and that's why it's so interesting. The Markov blanket nesting gets really interesting. Which Markov blanket can nest within another one? And when you have communities of interest, one of the things I always remind people of: in New York, for instance, you have Little Italy, you have Chinatown, you have different areas where people go together because of their Markov blankets; their language is easier, it's easier to communicate, it lowers the cost of communication, lowers the risk. You also have the jewelry district, the flower district and the cookware district.
Economically, people cluster together. So communities of interest, they're co-locating. Well here, if we look at your diagram with the four states, the right two states and the left two states, right? Everyone is now going to communities of interest based on the dynamics or rigidity of that left-hand two states. Oh, do you share a risk? Do you share a solution? But what if we could have communities of interest accumulated based on their capacity for change instead, or something like that? Just to raise one really important piece then, Stephen: when people align around the left side of that figure, reducing risk or shared policy or shared culture, it's often a similarity clustering, okay? So people often cluster based upon similar needs, similar understanding of the world, similar affordances. But let's imagine the reality of the organism, which is that actually it's a clustering, in some senses, on the opposite: on disjoint abilities. So in the context of the body, if you said, well, there's one kind of cell, it can't move, it just holds a bunch of fat and it literally just does hormone and fat processing. And there's another kind of cell that will dry out in a second in the sun, but it can contract. And there's this other kind, and it's like, you'd say, they're not gonna live in the same neighborhood together. It's like, oh, wait, no, they do; that's the body. And so how are we going to interpret all the levels of what it means to be a person, economically and culturally and psychologically, and understand that it's neither simple affinity nor simple disaffinity on any scale that will lead to function? Because function, and islands of function, are rare in the almost unimaginably large state space of possible legal codices, or possible programmings that could be done, or possible personalities. There are more ways to fail for far-from-equilibrium systems.
And selection is the hand that just keeps on sweeping the ones that aren't working off the table. So if we don't wanna get swept off the table, we're gonna be working together. And that means coming together across our differences so that we can have these non-linear successes as larger systems, not the largest system. It's not a simple imperative, but at the largest level that accomplishes our mission. So just such interesting topics, Stephen. Yeah, and just like you were saying there about these different cultures and the niche construction, I think this can help. There's a lot of problems in terms of discourse around this, and more broadly in the world. Like when we talk about systems out there, like cities: well, how much is a system, and how much has been an evolving niche construction? Some of which is constructed in this kind of linear, teleological, strategic way from above, in a geographical way, but some of it is just the way it's evolved over time with some of these dynamics, like we say, which I think haven't really been brought in. Or if they are brought in, that kind of ends up in the arts, and it all gets framed in whichever metaphor a particular school is applying, and you can get trapped by that. And I think this actually came up for me today. So just like you're talking about the indigenous ways you could have with the government or other ways: I think that we have this idea of systems, these different systems, and how do we integrate them? And there's this assumption that we need to be integrating them. But with these indigenous systems, it flips from one to the other. So for instance, we have our Western system, and the idea that we need to attack our system is quite a sort of progressive leftist approach.
Or we need to all revert to some sort of native indigenous system, and they had it right, and the body knows everything, we can just listen to our body. And there's this kind of gap whereby, how do you find a way which is maybe configuring different approaches? So it's like, okay, well, where can we configure the other ways of knowing, other ways of governing, other ways of... And it's not just a case of mashing them together, because they may just have a completely different dynamical process. And then they don't integrate. They just do different things at different times, like cells do in the body. Of course it's all gonna come down to the details. So none of these metaphors or questions are even the beginnings of the answer. But yes, that's definitely where we want to be understanding, at that level of societies: how much formal integration will there be? Now, if somebody says, I designed the interface for different operations, that is still coming from a cultural context. So the challenge, and part of the quest, will be to frame that interface in a way that doesn't pretend that it has a neutral arbiter ability, because that's actually one of those universalist fallacies that we fall into as abstractionists. Now we want to be grounded in action and still talk about an interface for action. And so I think actually that's why the task may be easier: rather than the thought bringing-together, it could be the action bringing-together. I'm always reminded of Scott's example he told me a long time ago, with two lumberyards that are competing in the market for selling lumber, but they have the same insurance. And then I was like, wait, they also could have the same internet provider, and they could pay taxes to the same local government, and they could have the same lunch service. It's everything except for lumber. So it's like, on how many fronts; they might even share a fence, so many areas. So they're almost collaborative on more fronts than not.
And so how do we take this understanding that on an overwhelming number of our edges, and even for each edge on most of the domains, there are just massive areas where we're in alignment? And then risk can be catastrophic. So it's like, if we just reduce the amount of cascading failures in the system by playing to our strengths in the areas where we are aligned, maybe all of a sudden now the lumberyard people are like, oh, you like this kind of wood, I like this kind of wood. We should just niche partition. But we're not gonna niche partition and fork and never talk. If you're gonna be selling the luxury wood and I'm gonna be selling the consumer wood, we need to have a better communication interface. Oh, you have employees who speak this language and I have employees who speak this language. Well, let's make an agreement that we're gonna send numbers back and forth in this file type on these dates, and here's who we're gonna go to if that doesn't work. It's like, wow, these two companies were fighting in the market in a previous or a different paradigm. They might have been wasting money on advertising or committing different sorts of gray-zone interferences with each other's operations. But just by reframing it in this multi-scale system, even if you're not doing a government change, whatever government system there is, those two lumberyards now have a new way of talking to each other, to their employees, with their lawyers, whatever, potentially in a way that could lead towards them working together better. So that's just very powerful. I think that's what will be cool to continue returning to. And we're really moving there from a zero-sum game to a non-zero-sum game, right? There's a winner and loser, and what you do, that's why I always think of that: you add the dimensionality and it recasts the game as not a winning-and-losing game. And that's what we have to do in Gaia theory, right?
If we wanna save the world, we have to recast the game as the entire game, right? The Sustainable Development Goals, even the UN has them, and they're even tripping over them. They developed the Gujarat power plant in Bangladesh, and it scored big on the energy SDG and on the development SDG. And then they got sued by the local fishermen and the local population for pollution, which screwed them on the other SDGs. They didn't even talk among the SDGs at the UN. Now that's not surprising if you know the UN as an institution, but it shows you that it's hard to integrate. Even when you have the same source of the norms or the aspirations, they just go for it on one of them, and they're not talking to each other, right? And here's why preventing failure is so critical. Which body is healthier: each organ system at 90% function, or most of them at 99 and one at zero? Well, that second body is dead. So all of the functions need to be adequate or better. And we need to improve adequacy, improve resiliency, improve accessibility, et cetera, but preventing failure cannot be just part of the discussion on reducing risk. Reducing cascading failure has to be an imperative. And so in that framework, where we want multiple components to be functioning, it just reframes the, oh, should the government be interfering here? Or should this person be undertaking this policy? And again, like I said, the task to synchronize on action might be easier than thought. So those two lumberyards: getting them to go to the same religious institution, or send their children to the same schools, or listen to the same YouTube channels? Why? Why even go down that road in the metaphor, when you could just offer them a service where they're de-risking their lumber business? So instead of saying, we'll synchronize these lumberyard people on this and then they'll just act nicely together.
It's like, yeah, or you could just provide the interface so that whoever they are, however everything changes, they'll be able to play nicely together. Instead of just saying, well, if we synchronize them in this way, then they'll know what to do. They won't, because people don't. And that's why one of the things that I've talked to people about, as I say, is that the recruitment I'm doing in the program at the lab is not based on selflessness. It's based on self-interest. And the assertion is that they can de-risk in ways they cannot do unilaterally, period. That's what I tell people. And so you get together. So it's a commitment: if they wanna de-risk in these new ways, they can't do it by themselves, just like a stoplight. I didn't send around a memo saying, let's all stop at the red light. That was done, right? These are scales at which we can't act unilaterally. And so there's that tempting bait, like it was with swaps. In Australia, swaps were outlawed when they first came out, and the banks in Australia were getting killed because they couldn't swap out their interest-rate risk. So they lobbied the Australian government to allow swaps, and then it did allow swaps, and they were able to swap out their interest-rate risk like other banks. The downside is they all then got hurt in 2008, when the swaps were used for speculative purposes. That's a different problem. But that invitation to de-risk in ways you can't do alone, it's kinda a crass way of getting people in the space. But to me, it's the hook that joins that left-hand side and the right-hand side with those four spheres. Because on the left-hand side, you have this characterization of risk as a zero-sum game. On the right-hand side, well, maybe it's non-zero-sum in that context, but it allows for a bigger dimensionality: de-risking increases when you increase the system size.
And so if you get those people who are similarly situated together, it's new markets. That's what trade associations are, right? Trade associations: people are competing over whatever, potato chips, lumber, it doesn't matter. Trade association competitors get together, and what they do share is information and insurance; those are the two big things. They co-insure for whatever risk they share, and they can buy insurance cheaper, of whatever kind. And they also share information so they can understand situational awareness better. Yep, I put up the slide here again. Yeah, Ivan, do you have anything you wanna add or remark on? No, just chilling. Yeah, okay, continue, Scott. Wait, weirdly, I think we just lost Scott's audio in between. Oh, no, I muted it. Okay, okay, sorry, continue. But I was thinking, there was some other thing I was gonna say, but I can't remember it now. Oh, yes, yes, this is an example from the fisheries industry. Elinor Ostrom worked very closely with people in the fisheries. So a lot of fisheries are designed with Ostrom's notions of the commons. And one of the things that a friend of mine who works in the, hold on a second. Okay, Steven, any? Okay, sorry. One of the guys who works in the fishing industry, he said, they had a problem. They had this thing called bycatch, and it's the fish you're not supposed to catch. So, I'm gonna make up these numbers and the species, but let's say you're allowed to catch 200,000 pounds of flounder, but the bycatch would be orange rockfish or yellow rockfish. And if you catch four pounds of that, you have to stop fishing. It's a protected species. You can't fish anymore. You can't pay your bills. You can't buy the oil. You can't send your kids to school. You don't have money. So they were trying to figure out, how are we gonna protect this orange rockfish or yellow rockfish? So what they did is they said, okay, let's pool the allocations of different fisheries.
So if they had three guys, each with a four-pound allocation, they can pool it and make a 12-pound allocation. Then what happens is, if one of the people goes out and fishes and catches five pounds of yellow rockfish, he can still fish, whereas in the past he might have had to stop. So they spread the risk out. But then what happens is the other fishermen start to be policemen, because he's cutting into their allocation. So they go over and they say, hey, hey, hey, use a different size net, or don't fish in that area, or whatever. They share information, because when he's cutting in, it's an economic loss for all of them, because they pooled their allocation. So to me, that idea is a big deal, because what they're doing is they're spreading risk. And when you spread risk, it's like insurance. If more people have claims out there, then my premiums go up, right? So I'm gonna encourage things that make things safer, right, as a stakeholder in that system. And so similarly here, it feels like when you move to that right-side diagram, we're gonna have more avenues with which to create information flows among the parties that will actually de-risk better than the left side, and make it cheaper. Steven, and then I'll have a point on that. Yeah, I think this use of this diagram could be important, because in a way, I suppose with this diagram on the left, it's like you either just put in laws to mitigate those risks and challenges, or else the transdisciplinary tool that we have at our disposal is money. Money is the thing that everyone uses, basically, as the transdisciplinary allocation of what to do on projects. And it's highly problematic, and there's not really a lot of other things, apart from what budget gets allocated, that allow that. So is there a way that this can provide us something, and at different scales?
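The allocation-pooling arithmetic in the fishing example a moment ago can be sketched in a few lines. The numbers are the made-up ones from the story (a four-pound bycatch cap per fisher, three fishers, one of whom lands five pounds), so this is purely illustrative:

```python
# Bycatch-pooling sketch, using the made-up numbers from the fisheries story.
individual_cap = 4             # pounds of protected bycatch allowed per fisher
fishers_catch = [5, 1, 2]      # pounds of bycatch each fisher actually landed

# Without pooling: any fisher over their own cap must stop fishing.
shut_down_alone = [catch > individual_cap for catch in fishers_catch]

# With pooling: the caps combine, and only the group total matters.
pooled_cap = individual_cap * len(fishers_catch)   # 3 fishers x 4 lb = 12 lb
group_total = sum(fishers_catch)                   # 5 + 1 + 2 = 8 lb
group_shut_down = group_total > pooled_cap

print(shut_down_alone)   # [True, False, False]: fisher 0 would have to stop alone
print(group_shut_down)   # False: pooled, everyone keeps fishing
```

Under the individual caps, the first fisher is done for the season; under the pooled cap, the fleet keeps fishing, which is exactly what gives the other two an incentive to police and share information with the first.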
Because sometimes, yeah, it's fairly clean at kind of a national level, or you have your ruler's level and the individual, but most stuff's happening in between, as we're talking about. How can we somehow get a handle which is not too culturally, metaphorically constructed, but is maybe relatively universal? I know it's risky to use that term, but I think there's something in being able to say, okay, this is how this information gain happens. This is how this works, you know, and now you're starting to have a way to talk about dynamics in ways which aren't normally possible. Yeah, one thought on that. And again, to this point about, you wanna design the nexus, the interface, in a way that is inclusive and accessible, but respects that people don't just disagree, they actually have different visions and understandings about this whole system that we're in. And so here's where we can tie it to this figure, and how it could be used to communicate and maybe even organize around. So the left side leads towards abstraction, because it's like, well, let's wait until we're less ambiguous, and then once you have no ambiguity, it's like, well, let's figure out what the least risky policy is. But guess what, all the policies have risks, and even if there was one that somehow had fewer risks than strictly all the others, there are still black swan events, and there are still people with different preferences for a given pipeline for oil or for a given fishing agreement. So the idea that you're gonna minimize risk for all parties in a way that they're satisfied with is gonna be this quest where you're gonna be at the drawing board quite probably forever. Let's contrast that with the right side. And the authors, I hope, will be okay with us taking wide latitude, because we're taking this figure beyond what it was framed for, but that's what's fun, right?
Yeah, now let's go to that drawing board with the power plants and the fishers and the Ostrom work and everything like that, instead of just getting frozen at the drawing board on the left. So now, on the right side, we're saying, look, we are at the drawing board. So whether we're speaking through a translator, or you believe that we have different interests, we're here at the nexus. So, universals aside, what's locally gonna be working for our system? Well, we have two imperatives. We wanna make maximum information gain, but, if you look at that I(s, o | π) term, it's not information gain on the observations alone. That's this part. See how this one is strictly in the o? Here's ambiguity about state-observation mappings, okay? This is maximizing information gain on states and observations, given policy. In other words, we're not just reducing our ambiguity about states of the world, like, give me a more accurate stock ticker. This is like, hey, what's the most informative policy that we can enact to learn about the states? So we wanna query the markets in a way that isn't just getting increased precision on viewing them. And so now we're all at the table together. So whether we believe ourselves to have, or actually have, adversarial interests in one dimension out of 300, or all the dimensions, or whatever, we have an expansive, maximization-oriented, growth-oriented goal together, which is implementing policy that helps us understand our system, and the mapping between the hidden states and the observations, better, together. Now, by doing that, we're also engaging in joint action, which will prepare us for larger and better action. And what are we doing joint action to do? Well, part of it is to be consistent with the kind of system we believe ourselves to be. But the other part, the minimization, what we're running away from, the stick, is disorder with respect to the observations.
So not with respect to us getting paralyzed about which policy, but we wanna reduce our surprise about our observations in the context of optimal policy. Okay, so here, how much CO2 is there really in the world, and how much do we really make, and what would be the riskiest policy, and should we just remove gas cars? I mean, you could go down this rabbit hole forever, right? Here is a predicted trajectory. Here are different types of models, and what they predict about the trajectory of CO2 levels through time. And we wanna reduce our uncertainty about that model. Together, all kinds of people who believe different things, we're gonna reduce our uncertainty purely about observations, as we jointly use policy together to learn more. It's just such a positive way of framing it. And again, people are still gonna disagree, but we're coming to the table together. So as long as we're not at arms, and we're coming to the table together, we don't need to go universal, we don't need to go culture-free. We just say, we're here, we're figuring this out today for ourselves now, and for other people later. And that's beautiful. I mean, I can't wait to have more facility with the concepts, because the processes you're describing are going to become governance. It's not like we're gonna find governance for the processes; those processes will be governance. That's how it's gonna happen. And so the question is, we're on the left side: how do we get to the right side? What are the stages that we can go through to move us over there? How do we prepare ourselves as groups to be able to function on the right side in a natural, organic kind of way?
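The two imperatives being read off the figure here, maximizing information gain about states given policy and minimizing surprise about observations, correspond to a standard decomposition of the expected free energy G(π) in the active inference literature. This is a sketch of that usual textbook form, not necessarily term-for-term the notation in the Hesp et al. figure:

```latex
G(\pi) =
\underbrace{-\,\mathbb{E}_{Q(o,s\mid\pi)}\big[\ln Q(s\mid o,\pi) - \ln Q(s\mid\pi)\big]}_{\text{negative information gain about states, } -I(s;o\mid\pi)}
\;-\;
\underbrace{\mathbb{E}_{Q(o\mid\pi)}\big[\ln P(o\mid C)\big]}_{\text{expected log-preference over observations}}
```

Minimizing G over policies therefore simultaneously maximizes the epistemic, information-gain term (the carrot of querying the world informatively) and minimizes expected surprise about observations relative to the preferences C (the stick of disorder being described here).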
One of the things that I've been advocating for is collecting practices from the left side, but then having groups stare at the practices, so you can say, okay, it's basically changing the assembly. The disintermediation of the left side has been done: by the internet, by COVID, by the big challenges to those systems; they reveal the limitations of those systems. And so the disintermediation is done; on the right side we're gonna re-intermediate, because there is still mediation of the individual components in the system, right? They're brought together, but we're re-intermediating with something that's fundamentally different in some ways, and in some ways quite similar. And so one of the things that I'm interested in is trying to find ways to make it so similar that people don't even realize, or they're comfortable with, the move to this new framing of analysis. Very cool. Yeah, Stephen. And this ties in as well. Actually, it's so good that you've gone through this diagram, it makes this very much jump out: this is like a safe-to-fail approach. So if you imagine that policy wheel, the red wheel on the right-hand side, active inference knows that our choices are really what we do with our policy. A lot of what's out there in the world, the hidden states, we only have so much control over. So if we're able to almost shift the policies in there, knowing that they're safe to shift, these safe-to-fail experiments, which is what they talk about. So if you do safe-to-fail policies, it means you can then observe the states, observe the observations, observe the overlaps, observe the dynamics over time, and then you've got an awful lot of new information that you can use. And obviously the key is not to have something in there which will be catastrophic; this is where anti-fragile comes into it as well.
So you can learn as you do, obviously knowing that there will be certain times when you've got to pull away and stick to a risk-minimization focus. But this gives a way to talk about that, probably, to people from very different views of the world. Yep, so another metaphor there, with performance, would be like a martial art: the safe-to-fail and the sparring and the play. So there are times to play, and for children, and there are times to be serious, but no matter what you're doing, it's always serious, because people can fall. If you're walking, you can fall. If you're talking, you can say something mean. If you're holding data, you can lose the data. So everything is very important, and so we need to have that resilience in mind. One other way in which this can be used, and this sets aside the purely adversarial scenario, somebody at the table who doesn't want to communicate with us. So now we're in a situation where people want to work together. At least they've bought in at that level, okay? We're talking about the system, and someone just has this figure up, just as they do. And Steven, you said it: what we control is policy. We control climate policy. You know what? We don't control the climate, but what do we control? Our climate policy. We don't control biogeochemical cycles. We influence them, but we don't control the outcome. And so people could be talking, and someone's like, well, I want to see this. Oh, okay, you're talking about C. You're talking about your preference vector. And someone says, or are we talking about the state of the world? Are you actually saying that this is how people feel, or are you just stating your preferences about the world? Fine, if you want to, I want to learn about your preferences, and remember, your preferences are a key factor that really goes into our policy, but our policy is not the same thing as the states of the world. So it's like, okay, now what are the observations?
Someone says, no, no, no, actually it's 80%. So okay, your inference is that this is 80%. Is that right? Well, here's this observation. Oh, well, this study is biased. Okay, well, great, what if it was off by a factor of two? So there are so many ways to just have this figure up, and someone says, are you sure about that policy selection? Say, okay, well, it sounds like we're up here. Sounds like we're in the metacognitive world. We're thinking implicitly about our own group's generative model of this policy decision-making process. And it seems like you're uncertain about this, and what are you uncertain about? Okay, what could we reduce our uncertainty about? Is it your preferences? Is it the way that your preferences and our uncertainty are being combined with the affordances and the niche into policy? That, of course, is the hardest knot to untie. That's control theory and everything. But you can at least say, hey, is it your uncertainty? Is it your preferences? Is it the niche perception, the affordances? Not just the preferences, but the niche itself, E, the prior, and is that being combined in a way that makes sense to generate policy? Because if you disagree with the policy, which part of these was it? Because that's all that went into it. So don't just say you disagree with the policy. Tell me: do you value something that wasn't valued by this policy? Okay, now that we're talking about a given policy, do we have synchronization, at least coarsely, on the starting state? Not on all the details, but at least the starting state for our scenario. And are we talking about states of the world? Do you believe that our policy is gonna influence them? You think that cutting the miles per gallon to 20 is gonna change the way that the hidden state, CO2 in the world, changes through time? You think it's gonna do that? Well, I don't think it's gonna do that, or I think it's gonna do this instead.
And so now we have an interface: okay, policy is how states of the world are changing, not just as a function of models of prediction, but as a function of our interference in the system. And it's gonna be manifested in observables. Not just, we believe that 80% of this will be happening by this future time, but rather, as a function of policy, we're gonna see these observables. And if we're surprised by the observables we see along the way, we're gonna be course-correcting, and we're gonna be at the same table again. So there are so many actual structural ways to talk about this, I think, as well as narratively, that people could use. Even having this figure up, maybe it could be simpler than this one, maybe it could have a few more pieces, but just separating out where people disagree. It's like, yeah, in a legal, I don't know. Yeah, in a legal system, I could imagine this being useful too. Oh, it would have to be recast as something where people wouldn't fall out of their chairs, because legal people don't like to see squiggly Latin and Greek symbols. But absolutely, no question. The way you described it, what you're doing is you're really mapping. I mean, it's teaching and showing people, like whenever you have a map that says "you are here" when you're lost in the zoo or something like that; really it's a you-are-here map, right? It's C, G, E; it's allowing more granularity, and it's offering more dimensionality for solutions, right? Because if people are just talking about policy, there'll be ambiguity if they're not talking about C, G, or E. Yes: what are your preferences? C. What is the field of affordances? E, and the prior over those. Like the quote-unquote nuclear option: it's an affordance, and it's the last resort. So tied up in the affordance is how likely it is. That's your prior over policies.
So you have the field of affordances and the prior over them, E, and then you have your preferences, C, I wanna see a world where there isn't radioactive fallout, and then you have your uncertainty. And then those get combined in a way that is formalized with G. And that relates to π, policy selection, being the outcome of the free energy being minimized, which is, again, what's the most likely policy? And someone's like, wait, most likely? It's like, right, but most likely under the model of the world that you believe will happen, not just the one that you think is the run of the mill, because, quote, the run of the mill is not run of the mill. Yeah, it's really interesting, because, I mean, you're integrating subjectivity into an analysis of reality in a formal way. I believe that this is where the dual instrumentalism is the most powerful thing in the FEP zone. So the dual instrumentalism, again, it's also kind of like a motte and bailey, I don't know if I'm mispronouncing that, where there's a strong and a weak form of an argument. So again, the strong claim would be: this is what systems are and this is how they're operating, that is what a bacterium is, that is what the Markov blanket is. But the actually weaker claim is just: we're gonna use it as an instrument. It's like, we use linear regressions in time-series modeling to model GDP through time, or a regression of height versus risk for some disease. Doing that didn't say people are linear regressions. It did lead to a world in which certain types of conclusions were reached, and there were actual consequences, but it wasn't that people were linear models. So this is just a different statistical model. And as we're exploring today, and we'll continue to explore, even qualitatively it helps structure messy discussions about adversarial relationships.
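The combination being described, the affordance prior E, preferences folded into G, and a precision term, is conventionally written as a softmax over policies, π = σ(ln E − γ·G). A minimal numeric sketch, assuming made-up values for three candidate policies (none of these numbers come from the paper):

```python
import numpy as np

def softmax(x):
    x = x - x.max()            # subtract the max for numerical stability
    e = np.exp(x)
    return e / e.sum()

E = np.array([0.5, 0.3, 0.2])  # prior over policies (the habit / affordance prior)
G = np.array([4.0, 3.0, 5.0])  # expected free energy of each candidate policy
gamma = 2.0                    # precision over policies

# Policy posterior: pi = softmax(ln E - gamma * G)
pi = softmax(np.log(E) - gamma * G)
print(pi.round(3))             # policy 1 (lowest G) comes out most probable
```

Raising gamma sharpens the posterior around the lowest-G policy; lowering it lets the prior E, the habit, dominate, which is roughly the lever that the paper's affective precision dynamics act on.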
And in that way, and we're only barely scratching the surface of really formalizing the Markov blankets around companies and things like that, having this type of structuring and being able to disambiguate, it could be so much more. It could be translated between languages. It could be versioned. It could have impartial trust-providing groups. There's so much that could be done there. And it's funny, because one of the books I have on the shelf here is by Hanuman. It's called Doing Money. And it's about how money is a universal risk consolidator. So you get money, and you can take care of your other risks, basically. So one of the things I've been wondering about is, well, money's kind of turned into data, because you don't really spend cash anymore. So why can't data itself, without being monetized, be a universal risk mitigator? Stephen, you cracked me up when you said that money was transdisciplinary. I've never heard, you know, people say that about transdisciplinary research, but it's so true. The departments will pay each other in money. They won't cite each other, but they'll pay each other. So it's so true. But can data now play that role, and can this system? Because with money, you go to an airport, you've got to convert one system into another. There's different encoding of the bills themselves, right? Okay, but ultimately, I always tell people, if I'm outside a courthouse and someone says, I just got a $10,000 judgment, I don't know if it's for a divorce case or a slip-and-fall. I mean, it's money; $10,000 is identical. You can buy the same amount of groceries, but it could be for various different things that you were compensated for, right? So with data, it feels like this model might allow us to have data be a medium of exchange, and maybe storage, I don't know, I won't go into that right now, but a medium of exchange of risk, without having to go through the monetary loop. Very, very interesting. Yep.
I can kind of imagine some areas, Stephen. And this might be where bringing the body back in can help too. As, you know, you talk about them: there's money in my hand, there's money that we talk about when you get paid or something, and then there's capital, which is almost like an augmented cognition process, which is up there in the markets, you know, that we've offloaded onto, and it's very hard to embody what's going on up there. It's like, it goes off, it does its calculation, it comes back, and we do. I mean, half the people doing it are doing it on screens, and they themselves are being proxies for other people, you know. So there's a question about where this comes back into ultimate sense-making at different timescales. Yep, yep, you know, Ivan was smiling before. I didn't think he was smiling at the money thing, Ivan. Let's maybe have, yeah, anywhere you wanna go there, Ivan, if you want. Otherwise, let's have a little closing. Just imagine to you and you and your money and your money of new people. Yep, so maybe we'll have a final set of thoughts, so prepare that. But the other side, Scott, of the money trading in the airport is that in a healthy system, every sale is bi-directionally consensual. Every agreement is not forced. It's like, somebody wanted your peso, the other person wanted the yen. So they made a deal; they met in a marketplace. They wouldn't have found it on Craigslist; the transaction cost would have been too high, but somebody wanted it. Now, when does it become deranged? Well, when there are extra-market forces that force people to do things, or when there's devaluation and market manipulation, et cetera, et cetera. But again, it's not like there's a pure, untrammeled market out there, and then it's like, oh, regulation, how dare they? Actually, we construct the market. It's like, we're building the bowl. So it's not just like there's a pure bowl and then you've defiled it by putting clay on it.
It's like, no, what is the structure that you're going to be interfacing in, and can that not just be imbued with our values, which aren't always gonna have justification? They're often axiomatic. But can it actually allow diverse agents to act according to their deeply held axioms, so that there's room for people who have faith in different things? And it's been a joke for so long: oh, you don't need to believe in the legitimacy of the government to use its fiat currency. And now we're just seeing it in a new light. So. That's really interesting. The market may be just an emergence from the fact of information seeking in trades. Yep, and also leveraging, and how that's trust. There's Bitcoin on the blockchain; these are the examples I think about. But then there's the 10X-leveraged Bitcoin. Who legitimized that exchange, or that person, to offer a 10X leverage, or some other type of asset based upon leverage on Bitcoin? Nobody gave that permission. If that person or that operation is reputable, to whatever extent it is, it will succeed, hopefully, and again, anything other than that is dysfunction. So there's a way to talk about that that is pretty different from the reward optimization of different competing agents. That's the previous paradigm: reward optimization, competitive agents. And we're not just moving from competitive to cooperative. It's actually not that simple. It's actually more about moving from the reward mindset to the precision-guiding mindset, and everything else falls in line after that. The action and the precision are the key pieces. Some of the key pieces. And so that's the coherence-lending factor. Yes, we wanna be precise about process together. That's the minimal meme. Not, we're gonna have abundance in the future, but, today we're gonna have precision here. Yeah, so it's really fascinating. The way it's gonna iterate itself: it's more scalable, but it's not yet scaled. Yeah.
I think of, it makes me think of adaptation studies. I have a Cambridge book on the shelf on adaptation studies in theater, where if you take Romeo and Juliet and you translate it into another language, you've got a lot of work to do, because there's a lot of phrasing and a lot of behavior, a lot of things for adaptation. What we're doing here is like a giant simultaneous adaptation vehicle, because everyone's coming in with priors, and this system allows people to discover things that they have in common. It allows them to act in common. And that's gonna have such huge power. And it won't be something that will create spaces for it; it will create the space. The interaction space will evolve and then be. And then it'll be parameterized in metrics, and you'll know if you're in it or not. And there'll be certain behaviors that indicate you're in it, right? That's why I always say the process becomes the governance. There's the AI problem right now, not our AI, but the artificial intelligence that everyone is trying to create. Everyone that I can find is trying to create systems to govern the people creating the AI, but not the AI system itself. And I'm thinking more along the lines of Asimov, which is, you need the rules for the robots themselves. Similarly here, each one of these systems is a robot, not programmed by us per se, but it's caused by us, and it has features that are the result of our desires and needs. So it's programmed in that way. Corporations are robots; I think they're programmed. Same thing. Part of our constructed niche is how we would consider it. You're never gonna break away; it's intertwined with us, it's in our niche. Yep, go on. Right, right. And so it's interesting, because I think a lot of folks look at authority like the fiat money does. I think the US dollar is a fiat currency not just because of the rule of law, but because of a monopoly on violence. We have the most guns and bombs.
And so there's violence and imperialism and all that stuff wrapped up in fiat, right? There's a power there. And I think the power is shifting from physical violence to other forms of information. I won't use the word violence here, but information authority. And it feels like this system has the ability to declare that it can get you closer to reality, continuously. It's dynamic, it's like the scientific method; it's dynamically rejiggering itself. I wouldn't say closer to truth. I would say towards a better process together. Yes. Dynamically. That's right. There is no end state of truth. Yep. But that value proposition is not characteristic of the current systems. And so it has a unique feature: it de-risks in a way that's much more effective than the current systems. It requires people to do the cooperative thing, which is, like I said, with credit cards or ATMs or swaps, or anything with mitochondria, right? You know, your previously free-living things have to come together because there's a better deal to be had, right? So then we're conveying why there's a better deal to be had. We can use the existing problems in the world: climate change, war, water. It's not even a better deal. It's a better negotiating table. Yes. And the reframe is always from "there will be better states" to "we're going to seek precision today." Yes. Beautiful, beautiful. And that gets back to that now-versus-past-and-future thing that I was struggling with before, right? We're not saying it's better; it's a better way to be. And we don't know where being takes us, because we have models of it. You have a model, I have a model, we have a model; let's get together and figure out where we want to be. The reality is the synthesis of those desires, I guess, evolving through time. Closing statements: first Stephen, then Ivan, if you want, then back to Scott.
Well, yeah, I suppose the takeaway, and there was, you had the E, the G, what was the third letter again? There was C, the preference. Okay, let's see. I think that's really useful. I think that's a nice meeting point there between, and I keep coming back to this, ergodicity and non-ergodicity, in the sense that, you know, as you get down to the smaller scale, the body, and I think even outside of cognition, everything's got to follow that kind of ergodic churn. And then you start to have this ability to bring in these non-ergodics as we get to larger time scales. And, you know, we even see, as we get to the state, how it can borrow money in ways that an individual can't. We've seen this with the EU, because they're not constrained by lifespan in the way that mortals are. So you've got this weird ability to bring in preferences in non-ergodic sorts of ways, of just making a choice and deliberately creating a steady state, like a fiat currency. The whole point is it's steady and non-ergodic. And this is where I think a lot of the work with indigenous cultures comes in. You could say they're trapped in the old, cyclical way of understanding the world, but actually that gives you a way to tune into your deeper existential way of knowing about the long-term future, because that is where your phenotype knowledge comes from. You know, we have these structures that give us this ability to also do deep temporal depth, but we do lose the ability to integrate free energy as the dimensions come down and down and down to these kinds of artifacts. So yeah, anyway, this is what is coming up for me, thanks. Well, I hope, Stephen, that you and others can, in the future, listen back to what you've laid out and hopefully see what you were hinting at in a broader perspective. Ivan, any closing thoughts for 2020 indeed? Yeah, sure.
I just want to thank you all for a brilliant discussion. It's a great discussion for the end of this strange year. I hope to see you all next year, from the beginning of January. And thanks for your help on the project from the beginning. Really critical. Thank you. Yeah, so very chill. Scott, closing thought? You guys are, this is beautiful. You guys are beautiful. This is awesome. I feel like a metamorphosis: I'm gonna pupate a little bit, and then I'm gonna come out after the new year and be ready to fly with you guys. This is really delightful and fascinating, and I think it's gonna be very helpful for a lot of problems that are out there in the world. It feels like there's a lot of solution space that gets opened up with this, and with introducing it in digestible chunks. So I always talk to my clients about: where are you now? Where do you want to be? That's a tough one. And now I see why it's a tough one, because it's in a way the wrong question. Where do we want to be individually, and then how do we synthesize? And then what are the ways to get there? And then you triage them. So that's a standard thing I used to use; now that's changing, because the exploration of where we want to be is the answer to that really hard second question, which people have difficulty answering. So thank you for inviting me on the journey. This is great stuff. So to close on that, Scott, you just laid out a helpful way of thinking about it: where do you want to go? This is common goal setting. Who are we? Where do we want to go? How are we going to get there? But you brought in two pieces, which is the values. The values are what is not explicitly described, but they're part of the preference and the policy.
And so combining all of these disparate threads together, not having one heat engine for the refrigerator and one power plant for the other one, but combining all of these threads together so we can have a discussion from multiple perspectives. So that's the intercultural component, multi-perspective, but also bringing together these co-existent values that we have for system function, exploration, and so many other features. But really my closing thought for this livestream is that tomorrow, probably, I'm going to record a video called 12.0, which is going to be the clock striking midnight on 2020. And then we will indeed head into a very exciting 2021 and beyond. So both of you, well actually all of you, I guess, are RSVP'd for the Active Inference Lab. So I'm just really looking forward to those three projects and, just like you said, Scott, to the digestible bits. The livestream has been our learning process. We've all learned so much along this journey, but now we also want to be thinking about whether we could make shorter content, ones that have interactive parts, who knows, but there's going to be, yeah, great things happening ahead. So thanks, everyone who's listening live or in replay. Thanks, guys, for participating. Fantastic, and happy new year, everybody. Yeah, happy new year. Cheers. Happy new year. Wow, what a discussion. What a discussion. All right, well, I'm going to end the livestream. So thanks, everyone, and I will talk to you later.