3, 2, 1. Hello and welcome to Active Inference Lab, live stream number 27.0. It's August 23rd, 2021. And we're going to be discussing the paper, Active Inference: Applicability to Different Types of Social Organization Explained through Reference to Industrial Engineering and Quality Management. So welcome to the Active Inference Lab. We are a participatory online lab that is communicating, learning and practicing applied active inference. You can find us at the links here on this slide. This is a recorded and archived live stream, so please provide us with feedback so that we can improve on our work. All backgrounds and perspectives are welcome, and we'll be following video etiquette for live streams. At the short link, you can find the roadmap of the upcoming discussions that we're going to be having. These are going to be group discussions on August 24th, tomorrow, and then on August 31st for number 27.2. And hopefully the author will be joining us for both discussions. We have some openings in the coming weeks, if you want to join for a session, or if you want to recommend somebody who should join. Today in live stream 27.0, we're going to be learning and discussing this paper with the aforementioned long title by Stephen Fox, and the link is here. Just like all of the other dot zero videos, this video is just an introduction to some of the ideas; it's not a review or a final word. It's kind of always this assemblage of all of the participants here modifying the slides, going down a few avenues that we thought were interesting, and setting up some questions to discuss in the dot one and dot two. So we'll go over the aims and the claims of the paper, and the abstract and roadmap, and then talk about keywords, then go through the figures and tables.
So if you want to participate in these group discussions tomorrow or the following week, then just let us know, or save and submit your questions and get in touch with us if you want to participate. So we can just start with hello, and then it will be cool to hear from both of you what drew you to the paper or what made you interested in talking about it. I'm Daniel and I'm in California. So whoever wants to go first. I'm Blue and I'm an independent research consultant in New Mexico. And I liked this paper. It was a really refreshing, kind of interesting and very readable perspective compared to a lot of the other papers. And it made some links with industrial engineering, which I thought was very interesting and kind of a new take on active inference and the FEP. I'm Dean, up here in Calgary. And what I found interesting about the paper is just looking at it through the lens of engineering's priors, I guess, in terms of what they would hold up as active inference, just because of what they do every single day. And again, that kind of juxtaposed with the social piece was really interesting to me, because, not to overgeneralize, but when you're focusing on how to break something down, you don't tend to look at the whole. So I thought that was really interesting, to try to pull those two things together and make them coherent. I agree. There's that discussion of the interfaces between the engineered system and the actual way it's used in the niche, which I think we're going to return to when we're talking about a bookshelf. So the paper's title and the link are shown here. And I guess, maybe, do either of you want to just read the first two aims and claims? Sure. So the principal contribution of this paper is to relate active inference theory to everyday practice in social organization.
Active inference is a corollary of the free energy principle, which formalizes cognition in the autopoietic organization of living systems that resist the second law of thermodynamics by occupying a limited repertoire of states, thus persisting as bounded self-organizing systems over time. Okay. And then I'll read the last two, and then anyone can give a thought. It is explained throughout the paper how industrial engineering and quality management better enable social organization to occupy a limited number of states in order to persist as bounded self-organizing systems over time, rather than dissipating into their surrounding environment. Although previous papers by others have encompassed active inference up to the levels of socio-cultural cognition, this is the first paper to address active inference in everyday organizational practice, including industrial engineering and quality management. So the paper is setting up to be just one more plank towards applying active inference, bridging some of the practices that are already in use in the fields that we're going to be exploring, like quality management and industrial engineering, and then sort of the adjacent possible for what kinds of systems could exist in those domains. So sometimes we have a more mathematical paper or a more philosophical paper. This one's going to use a lot more of the jargon and, like Dean said, the priors of engineering. So that's why it's kind of cool to hear about it. Daniel, can I add something to that? Yep. Yeah. I just think it's interesting that, to use a bridging metaphor, seems to be a really easy thing to do. And then I think what the paper, in terms of the aims and the claims, does is try to point out where that connection might exist. And we can always argue whether or not it exists in all situations at all times.
But I do think that there are some times, again, when we can hold a static connection between an idea at one scale and an idea at another scale. And so not all scales allow for that or have those affordances, but sometimes they do. So maybe what we need to do is separate the times when the metaphor works and when it doesn't. Agreed. Being utilitarian implies some sort of utility metric. Having a preference for reward or for preferred states implies those being defined, maybe. And also I think this will come up in this paper, because it's all about which are the useful metrics to check up on, the useful policies to pursue. Okay. So does anyone want to read abstract one? I'll read it. Active inference is a physics of life process theory of perception, action and learning that's applicable to natural and artificial agents. In the paper, active inference theory is related to different types of practice in social organization. Here the term social organization is used to clarify that this paper does not encompass organization in biological systems. Rather, the paper addresses active inference in social organization that utilizes industrial engineering, quality management and artificial intelligence alongside human intelligence. Social organization referred to in this paper can be in private companies, public institutions, other for-profit or not-for-profit organizations, and any combination of them. The relevance of active inference theory is explained in terms of variational free energy, prediction errors, generative models and Markov blankets. Pretty interesting. I think usually when we talk about these multi-scale perspectives on institutions, cities, societies, it's always like the entire edifice, all of the niche and everything that's moving in it, is part of this one biological organism, or at least biological co-regulating system. And here it's very explicit that it's trimming the biological component from the engineered component.
It's kind of like the car and then the passenger. So then instead of just saying, well, the car and the passenger are part of the same extended cognitive niche, because that's one way to look at that situation, like it's an extended affordance that the driver has, another way to think of it is by making a very clean partitioning, recognizing that there's still this immersion function. Like, cars don't drive themselves, but you can still parse out the individual who's using the car from the design process of making the car. And one thing that's added to that: if you haven't read Lucy Suchman, and I always kill her last name, she's got a book out from many years ago that I think has still got some incredible shelf life. It's called Human-Machine Reconfigurations. If you haven't read it, it's a fantastic book; her case study is how people walk up to a photocopy machine for the first time, where the engineers are trying to make it intuitive, and how the human interface with that isn't always necessarily something that you can engineer for. So something to consider if you're looking at this idea. If I could just give another thought. I think that the uncoupling of social organization from biological organization is a really unique and fresh perspective compared to some of the other papers that we've looked at. I mean, especially considering that, I mean, it is biological organization because it's happening in biological creatures, but it's an interesting partition that the author creates to make it known that it's not talking about hierarchical organization in structures or societies. Yeah, so we don't need to worry about what's inside of the tissues and go into the biological organization. Okay, I'll read the second abstract and then anyone can give a thought. Active inference theory is most relevant to the social organization of work that is highly repetitive.
By contrast, there are more challenges involved in applying active inference theory to the social organization of less repetitive endeavors such as one-of-a-kind projects. These challenges need to be addressed in order for active inference to provide a unifying framework for different types of social organization employing human and artificial intelligence. So I have a thought on that, which might be somewhat controversial. I think about active inference in terms of exploration and exploitation. And I think it really depends on where your generative model rests, right? So is the generative model the model of the organization, or is it the model of the individual? Because I think that doing a highly repetitive project, that reminds me of exploitation, whereas a one-off project can be like exploration. So you can still update your model, depending on exploring some new avenue by this little side project, but you come back to updating your model, the model of your business, yourself, the way that you work, your organization, with knowledge about how to do processes in a one-off kind of process. I agree, especially because we talk about it a lot in the context of generating strategic novelties, perhaps, and those are seemingly less repetitive. But I think there's going to be some further clarification on that. And this also points to exciting areas like one-shot learning, or true structural novelty in active inference models. That's something that we've talked about in other areas, like how do you know when to be tweaking the knobs of the parameters as you currently have them set up, versus expanding a hidden state into two types of hidden state or something like that. Again, we're thinking about what the system is, realistically or from this instrumentalist way. Okay, the roadmap is written out here, and we're going to go through it in rough order.
It introduces in each of these sections one of the kind of key ideas or principles: variational free energy, prediction errors, generative models, Markov blankets, and survival. And then in section seven, active inference is proposed as a unifying framework. So this is kind of one of those classic paper layouts, with disciplinary sections one through N, and then the next section is active inference as a unifying framework, and then the next section is like challenges or current research directions, and then a concluding section, maybe with more of a call to action or more philosophical liftoff. There were a bunch of keywords, and I think we're going to get to all of them. So two columns of keywords. Let me shrink the video. Two columns of keywords. Okay. So first, active inference. It's in the title of the paper, and here's how it's written in the paper. Who wants to read the large text here? I got it. Active inference is the physics of life process theory of perception, action and learning. Active inference generates predictions. Active inference predictions are based on knowledge learned from past situations and perceptions of present situations. Active inference minimizes errors between actions that are planned through the inference of predictions and what happens when the planned actions are taken. So this is the first lines of the paper, right, in the introduction. That's how it begins, you know, active inference is the first word. So that's sort of the opening shot. And it's just a cool framing to bring it out as a physics of life process theory, yet this isn't about biological organization. So it's almost like recognizing the origins and some of the motivations for the kind of modeling that active inference provides, yet making another, later decision to partition the system of interest a certain way. So, cool framing. And then the next lines, so kind of shrinking that definition of active inference.
This is because prediction errors cause unwanted surprises, which individually or accumulatively can threaten survival by going beyond the limited number of states in which survival is possible. By contrast, minimizing prediction errors facilitates survival through least action. So this is also linking up several of those key terms that are going to be section headers later. But it's just this idea that predictions about the external world, especially if they incorporate preferences that are optimistic, help reduce the complexity of deciding which actions to take. And then the way that that gets operationalized is with the process theory of prediction error minimization, which is a real-time estimate of how well the generative model is performing. And then, just add anything or say more here. But it's just an interesting topic, how it's described as the principle of survival through least action. So what is this principle of least action? And how do we reconcile active biological agents as in one way participating under this survival through least action, even if it's just a misnomer, what does that really mean? And then how does survival through least action contrast or overlap with more familiar phrasing, probably survival of the fittest? Like, is the survival of the fittest the survival of the least active, or how should we read this relationship between Darwinian conceptions of fitness and which individuals survive, versus this physics-based framing of survival through least action under the principle of least action, the stationary action principle? Can I add something, Daniel? Yeah. Yeah, so I think it's interesting that, depending on how you define each of these terms, you can sort of transfer or oscillate across frames: what a frame is, which to me is identifying a focal point, maybe a margin or a limit, and what a fit is, what goes inside the other, like glove and hand or hand and glove.
And I think sometimes we talk about a niche fit, but that's different than what the frame of the niche is. So the fact that we're sort of ping-ponging back and forth between the two, I just wonder whether or not we're looking at the same thing, or if we realize we're looking at two things at once. Nice. Continuing on with the introduction, it's written: active inference is a corollary of the free energy principle, FEP, which formalizes cognition in the autopoietic organization of living systems. Within FEP, active systems must occupy a limited repertoire of states. This requires minimizing the long-term average of surprise associated with sensory exchanges with the world. Minimizing surprise enables them to resist a natural tendency to disorder. So this just kind of relates to some questions like: what other framings exist for this sort of active, intelligent behavior, and how do we think about broad concepts like reward, preference, curiosity, and surprise in active inference? And then the last paragraph. Any thoughts or not? Surprise rests on predictions about sensations, which depend on an internal generative model of the world. In particular, although surprise cannot be measured directly, a free energy bound on surprise can be, suggesting that agents minimize free energy by changing their predictions about what sensory inputs will come from actions, or by changing the predicted sensory inputs through changing action. This just reminds me of the kind of fundamental pairing in active inference, where the ways that you can go down that free energy gradient are either by updating parameters of your internal model, learning, development, or by updating action states, which is through action. Because you can't control external states directly, and you can't control the incoming sensory data directly, all you can do is control those two states that are under your control, the internal states and action states.
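As a toy illustration of that pairing, here is a minimal sketch, entirely our own and not code from the paper, with all function names hypothetical. It treats free energy as a simple squared prediction error and shows the two routes down the gradient: update the internal model (perception/learning), or update the world through action.

```python
def free_energy(belief, observation):
    """Toy free energy: just the squared prediction error."""
    return (belief - observation) ** 2

def perceive(belief, observation, rate=0.5):
    """Route 1: change the internal model so predictions match the data."""
    return belief + rate * (observation - belief)

def act(world, belief, rate=0.5):
    """Route 2: change the world (via action) so the data match the prediction."""
    return world + rate * (belief - world)

belief, world = 0.0, 1.0
for _ in range(20):
    belief = perceive(belief, world)   # swap in `world = act(world, belief)` for the action route
print(free_energy(belief, world) < 1e-6)  # True: the prediction error has effectively vanished
```

Either route alone drives the error toward zero; real active inference mixes both, which is why neither perception nor action by itself captures the scheme.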
All right, another term was the Markov blanket. Anyone want to just give, what's the Markov 101, or what's the way in which Markov blankets came into play in this paper or in another context? Markov blankets are essentially the boundary, or how a system creates a boundary, and how you can define essentially a non-equilibrium steady-state system with a Markov blanket. I thought it was an interesting play in this paper, and I think maybe we'll unpack it here later on in the intro video. We just, in number 26, talked a bunch about Markov blankets. So suffice to say that it's still open exactly which cases, which sense of the Markov blanket is meant on this continuum: from the analytical and mathematical with Markov, the OG blanket, to the Bayesian statistical generalization by Pearl and others, to this partitioning into incoming sensory and outgoing active states of the blanket by Friston and others, and then how that exactly relates to all these other topics that we're discussing. Okay, I'll play the video at 46 seconds. Blue, what was exciting to you about this? So this is the video from the Tesla AI Day that just happened last week, and this is a humanoid robot that Tesla is going to attempt to design. And so I just put in this definition of artificial intelligence from Google, and it said: the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision making, and translation between languages. And so, spoiler alert, this is actually not a robot. This is actually a person, but he made it look like a robot that was getting up and then, like, bam, I'm going to dance. But I think that kind of movement and perception about space, that kind of spatial awareness, is something that's still very much a challenge for AI. So that's why I threw this video in there. Agreed.
It'll be interesting to see how they do processing, and what kinds of algorithms and hardware and software end up being used for real-time robotics systems. Any comments on artificial intelligence, or anything, Dean? Again, I don't spend a whole lot of time on this, because to me, artificial intelligence is always a post hoc domain. Even this example: can we create a copy of this? So, I mean, it's already been done by something, and now we're trying to re-engineer it or redevelop it, as opposed to what active inference is, which is the inclusion of ad hoc, the what-if. So, I mean, if they can get artificial intelligence to a place that it's including what-if in the truest sense of all possibilities, then I'd probably be quite excited or scared, I'm not sure which. Nice. So, some other keywords were social and organization, and one of the goal statements of the paper was: this paper is intended to contribute to bridging the gap between theoretical papers in active inference theory, which have been reported to be too opaque to be understood widely, and potential application opportunities in multi-intelligence social organization, that being social organization that employs human intelligence and different types of artificial intelligence. So, whoever put this image in, what did it mean or represent in the context of the paper? So, I just looked, so I don't know that it's necessarily really in the context of the paper, but I started to look into how societies are organized, right? And so I just was doing some googling, and I thought that this was an interesting paper, and the title and the author are there in the small print if you're able to read them, but this is like a state organization, right?
And it describes the fascist media model as focusing on community and culture, and it describes the communist media model as focusing on state bureaucracy, and the liberal media model as focusing on market exchange, which I just thought was interesting, and just kind of an interesting way to, I don't know, reflect on societal organization and what the central theme is there. And it's kind of in line with that partitioning of the car from the user, because not included here are the human users. This is kind of everything but the eyes on the TV. This is just partitioning the structure of the organization as an engineered object. So that's a cool similarity. Yeah. So we just had a guest stream, and Pranev and Raphael, even they were debating: can we say that something that's an emergent property in an individual, which is then something that we can test as an emergent property within a dyad, can also become an emergent property within these high-variability states? And even they couldn't agree that there's some sort of a parallel. I think we know that with these high-variability states, like a social organization, afterwards, again, we can take an account of what it became, but it's really hard to know where it's going, even if you break it down into a communist or a fascist direction. It's telling us, like you say, what direction down the highway the car is going, but beyond that, it's still a lot of guessing. So I guess that's where industrial engineering comes into play, designing for that car on the highway. Industrial engineering involves applying methods such as task analysis and job design in order to predict results, evaluate results and improve results from processes during their development. The application of industrial engineering has been progressing around the world since the 1900s. So it wasn't an area I was too familiar with.
I saw a lot of mention of the 99 out of 100, and the three standard deviations, and then halt the process. So that was just an interesting thing to hear about. Any thoughts, or we can continue on? Okay, I guess a related field to industrial engineering is quality management. And so this is a type of system that involves, I guess, hardware and software and behavioral protocols. Quality management systems, QMS, involve documenting processes which have been developed through industrial engineering as process specifications, work procedures, etc.; monitoring processes for conformance to specifications, etc.; and using observations of non-conformances to inform the further development of processes. Both industrial engineering and QMS are focused on the continuous improvement of processes. The application of industrial engineering has been progressing around the world since the 1900s, and quality management since the 1950s. Any thoughts on that? Just having been a business owner, it's a process. Yep, definitely makes me think about precision of measurement changing, and then what can be measured, and what can be measured gets entrapped, and then what can be regulated becomes gamed, and all these sort of complex ways that assessment of performance fits one into local versus global optima. You can also see how instructionalism got systematized. Yep, if there's a way to do it, and this is the way that's approved, even if there later comes an improvement in the protocol, or some sort of change in the context, maybe it can't be utilized because it's not part of the approved version. And so there's no simple answer there, because you can't just say, well, okay, accept all changes.
And then if you just say, okay, we'll have a small bar for making edits, it's like, there's no perfect trade-off point with the false positives and the false negatives, especially for situations that truly are ambiguous. Which is why we fall back to these approaches that put us at the kind of functional inflection point of the pragmatic and epistemic outcomes, which is like active inference, rather than just pursuing reward over the short term or over some other timeframe. But common to all these different frameworks for control theory is basically statistical process control. So maybe whoever made this slide, was it, yep, Blue, go for it. So statistical process control is a collection of tools and techniques used to measure and analyze process data in order to characterize the behavior of our processes and achieve process control. And so in the paper, I don't know how much we'll get into here, but it really describes statistical process control and gives a figure similar to this. So you have normal process variation, which they described as a six sigma deviation from whatever is optimal. So there's the optimal, which is like the green line, and then three segments up and three segments down, as an over- or underestimate of your target. And then any time that you have a process that deviates outside of the six sigma range, you have special cause variation. And so in the business schema that I'm familiar with, at least, which is minimal, you do this root cause analysis: why is there this special cause variation? What happened in that one circumstance? We have to go and analyze that instance to find out why it was outside of the six sigma deviation, because that's an unacceptable data point at that point. Nice. Good explanation. How about thermodynamic entropy?
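To make the three-sigma idea above concrete, here is a minimal Python sketch, our own and not code from the paper, with hypothetical function names. It computes Shewhart-style control limits at plus or minus three standard deviations around the baseline mean, and flags any new point outside that band as special cause variation to be root-caused.

```python
import statistics

def control_limits(samples, n_sigma=3):
    """Shewhart-style limits: baseline mean +/- n_sigma standard deviations."""
    mean = statistics.fmean(samples)
    sd = statistics.stdev(samples)
    return mean - n_sigma * sd, mean + n_sigma * sd

def special_cause_points(samples, new_points, n_sigma=3):
    """Flag points outside the control limits as special cause variation."""
    low, high = control_limits(samples, n_sigma)
    return [x for x in new_points if x < low or x > high]

# Baseline measurements of a stable process (mean 10.0, small normal variation).
baseline = [9.9, 10.1, 10.0, 9.8, 10.2, 10.0, 9.9, 10.1, 10.0, 10.0]
print(special_cause_points(baseline, [10.05, 9.97, 12.0]))  # only 12.0 is flagged
```

Points inside the band are treated as normal process variation and left alone; only the flagged point triggers the halt-and-analyze step described above.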
This was also me: it's a quantity representing the unavailability of a system's thermal energy for conversion into mechanical work, often interpreted as the degree of disorder or randomness in the system. And so, you know, there's the second law of thermodynamics, which says that entropy in the universe is always increasing. So going from more ordered to less ordered, from highly ordered to more disordered, which this kind of shows in this example, things spreading apart. And it's like, if you left town for two years and you came back and found that your house was a mess, that wouldn't really be surprising, even if you left it clean. I mean, things just deteriorate, fall apart, dust gets in, rodents creep around and stuff like that. And it's that apparent and empirical local organization of matter, like into bones and flesh and life and whatnot, and also the always and only co-occurrence of that kind of local organization of energy with cybernetic or anticipatory behavior, that's what motivates the kind of framework that is going to, on one hand, work for particles, but also work for active particles and all of the continuum between. So that's why there's all these different cognitive structures, as well as physical structures, that are being linked together through basically physics. And by hooking to the free energy principle and variational methods, and all these other areas of math and physics, it puts active inference just one step closer towards embodiment on some of those different pathways. Okay, information-theoretic entropy. So this is the Shannon entropy that's often referred to, because it was, I think, first described by Claude Shannon. But I always put Maxwell's demon here when I'm thinking about relating thermodynamic entropy and informational entropy.
And I think it was Alex Kiefer who wrote a paper, which we did in one of the live streams, that kind of outlined how these are connected with the Helmholtz distribution. And so if you really want to dig deeper into these relationships, that's a good way to do it. But I always think about Maxwell's demon. The idea is that you have molecules, like shown on the left here, in this box, and you have two sides of the box and there's a partition in the center, and there are cold and hot molecules. So if you knew which molecule was hot and which molecule was cold, so the demon is sitting there, who can see whether the molecule is hot or cold, and only lets the hot ones go through to the right and only lets the cold ones go through to the left. And every time he opens the partition, effectively that is work. And so just kind of conceptually, that shows the relationship between information and work: the more information you have, the less work you have to do, maybe. So that's kind of the relationship that I think about in my mind. And then there's a formal definition of Shannon entropy down below, but it's usually given in informational contexts as a measure of surprise, as we've talked about on the live stream many times. And they relate information as negative entropy. So it's the reverse: information is the reverse of surprise. Cool. Thanks for the explanation. Dean, any thoughts on that? Yeah, I love this. Thanks, Blue, for that explanation. It's really good. So one of the people I'm reading right now is a guy by the name of Dan Siegel, and his book about the mind.
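The formal definition Blue mentions is short enough to compute directly. Here is a generic sketch, not tied to the paper, where the surprise of an outcome is its negative log probability, and Shannon entropy is the expected surprise over a distribution:

```python
import math

def surprise(p):
    """Surprise (self-information) of an outcome with probability p, in bits."""
    return -math.log2(p)

def shannon_entropy(probs):
    """H(p) = sum_i p_i * surprise(p_i): the expected surprise, in bits."""
    return sum(p * surprise(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))  # fair coin: 1.0 bit per flip
print(shannon_entropy([1.0]))       # certain outcome: 0.0 bits, no surprise
print(surprise(0.25))               # a 1-in-4 event carries 2.0 bits of surprise
```

This also makes the "information is the reverse of surprise" point tangible: the more probability mass your model puts on what actually happens, the lower the surprise it registers.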
And what he talks about is when people are at dis-ease, so not an easy state, when they're struggling with something, he talks about the fact that there's one of two ends of a continuum, be it blue and red in this example: one end is chaos and the other is rigidity. And so from an information-theoretic entropy standpoint, are we stuck? Are we bored? Are we feeling trapped? Or are we being overwhelmed or underwhelmed? Meaning we're not in that in-between state. And so I think there's a lot that can be done if we get a basic understanding of Maxwell's demon, without sort of getting scared off by it. Cool. It's almost like, yeah, what would these particles and their temperature represent if they were, I don't know, ideas or something more related to inference, rather than just particles? Because we're always looking at that intersection between the physical systems and the informational systems. All right. Variational free energy. So this keyword was defined in the paper as the difference between predicted sensory inputs and actual sensory inputs when monitoring processes. VFE increases as the difference increases between what is predicted to happen if actions are taken and what does happen when those actions are taken. The more prediction errors there are, the more thermodynamic energy will be consumed amidst the thermodynamic entropy of remedial actions, corrective actions, restorative actions and firefighting. This will leave less thermodynamic energy available for doing productive work, comprising productive actions. Thus, if the variational free energy is high, then thermodynamic entropy, i.e. energy not available for doing productive work, will be high, and the thermodynamic free energy, i.e. energy available for doing productive work, will be low. Hence variational free energy can have an inverse relationship to thermodynamic free energy.
This is because the bigger a prediction error, which can be considered in terms of bigger VFE, the smaller can be the thermodynamic free energy available to do productive work. So this is a pretty interesting passage. Did either of you have a thought on it? It's exploring this thermo-info nexus in a little bit of a different way, and on one hand it's kind of confusing that the variational free energy is going to be the opposite of the thermodynamic free energy, but it's almost like, what is happening here? And to see that they have a negative relationship, it's like: if the engine were perfectly anticipatory and super efficient, it would be carrying out a ton of work. If it's moving in a super spurious, heat-producing way, then you produce a lot of variational free energy, a lot of surprising observations relative to functional operation of the motor, and then your capacity to perform work, to capture those very noisy vibrations, is low, and so a lot of the underlying potential becomes dissipated rather than used to perform useful inference, useful action. So I'm not sure if that's a correct mapping, but I think it'll be cool to talk to the author, because this is a very fascinating idea about how different formalizations of free energy actually relate to modeling processes. Yeah, Blue. So this is really kind of what made me think of Maxwell's demon, right? It was interesting because, you know, when we talk about minimizing variational free energy, we always think about it as minimizing surprise, right? So if we minimize surprise, then we don't have to expend all this thermodynamic energy, like the firefighting that they talked about, and I think that when we get into the five whys and that figure, the paper will elaborate a little bit more on that. But it's just interesting because we don't have to output as much work, and that's kind of what made me think about Maxwell's demon and kind of why I snuck that in there.
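The Shannon entropy and "information as negative entropy" idea from a few paragraphs up can be made concrete with a few lines of Python. This is just a minimal sketch of the standard formula, not anything from the paper; the function name and example distributions are illustrative:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)), in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin is maximally surprising for two outcomes: 1 bit.
print(shannon_entropy([0.5, 0.5]))    # 1.0
# A heavily biased coin is much less surprising: like the demon's
# sorted box, most of the information is already known in advance.
print(shannon_entropy([0.99, 0.01]))  # ~0.08
```

The sorted box in the Maxwell's demon story corresponds to the low-entropy case: once you know which side each molecule is on, observing it carries little surprise.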
But yeah, you're right, I don't know. I think that the relationship between infodynamics and thermodynamics is still very much being defined; there's a lot of very recent research that's looking closely at how those two relate to one another. Nice. Dean? Yeah, I just immediately said to myself, I really want my autonomic nervous system to work and I don't want to have to think about it, because a lot of the mechanical things are dependent on that. So I'm not sort of pulling apart what the claim is here, but I think if you read that, you're kind of saying, okay, a lot of the surprise you're not even aware of because your autonomic nervous system is functioning correctly. Great point there. It's almost like variational surprise does not equate to attention, or even especially the conscious awareness of attention in a human psychological situation. So it's totally possible that, like, the difference in pressure sensing depending on your body's posture is completely normalized out, recalibrated for the expectations of body position, without coming to our attention unless we bring it to our attention. That, and many other similar examples. So Dean, I love how you're not able to stay outside of the biological partition. Well, again, I want to be able to create with this stuff, I mentioned that, and so to me, to willfully ignore that is hard for me to do. I'm not saying it's not possible, I'm not saying it's the right thing to do here, I'm just struggling to let go of it. I want to let go of it but I'm having a hard time with it. Well, and I think for all of us, you know, the urge to apply it to every system is very strong.
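The inverse relationship between prediction error and energy available for productive work, described in the VFE passage above, can be caricatured with a toy calculation. Everything here, including the function name and the linear cost-per-error assumption, is an illustrative invention rather than anything from the paper:

```python
def energy_budget(predicted, actual, total_energy=100.0, cost_per_error=10.0):
    """Toy model: each unit of prediction error burns thermodynamic
    energy on 'firefighting' (remedial action), leaving less for
    productive work. Assumes a linear cost, capped at the budget."""
    error = sum(abs(p - a) for p, a in zip(predicted, actual))  # crude VFE proxy
    firefighting = min(total_energy, cost_per_error * error)
    productive = total_energy - firefighting
    return error, firefighting, productive

# Accurate predictions: no firefighting, full budget for productive work.
print(energy_budget([20, 21, 19], [20, 21, 19]))   # (0, 0.0, 100.0)
# Poor predictions: high 'VFE', most of the budget burned on remediation.
print(energy_budget([20, 21, 19], [25, 17, 19]))   # (9, 90.0, 10.0)
```

The point of the sketch is only the sign of the relationship: as the prediction-error term grows, the productive-work term shrinks, which is the inverse coupling the passage describes.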
Right, here's Table 1. So here's the mapping of a bunch of different sets of terms from active inference constructs, and then thinking about how they're applied in industrial engineering and quality management practice. So variational free energy, which we were just talking about, is the difference between the predicted sensory inputs and actual sensory inputs. The variational free energy upper bound, as well as its lower bound, is basically the range in which that system is working well; it's like the loaves of bread are within the calibration range and we can just keep on pumping them out. Precision weighting has to do with which aspects of the process are monitored with what level of precision, for example which width of a part has to be machined in fractions of a millimeter versus other tolerances. And then the expected free energy is the expected difference between the preferred sensory inputs from process operation and the actual sensory inputs. So the variational free energy is the difference between the prediction and the actual: you have some generative model of your data center, and then you have the generative model of the temperature through time, and then the comparison with the actual. But here it's about the preferred versus the actual. So that's like, we recognize that it is warmer than we would like it to be, which is why we're conditioning our policy in the long range towards getting back to that zone, but for now we need to also have an accurate sensor measurement. So that's also a nice juxtaposition that's not always finessed in other papers. And all these keywords: the first set were about free energy and precision and how these terms are calculated relative to one another. The second set of terms in Table 1 is some of the key terms of Bayesian statistics. So just to read only the constructs, and these are also really helpful because we as a lab are thinking about what are the core terms of active inference, what are the definitions that are general
enough to be really educational but also not too limiting, or that don't prioritize one sense over another, for example. So the Bayesian terms that would be found in any other course or textbook on Bayesian statistics: prior, expected posterior, actual posterior, preferred posterior, and prediction error. So that kind of puts predictive processing under the umbrella of Bayesian statistics. Okay, the next set of terms in the table was related to this dichotomy of two different types of inference. So epistemic inference was defined as updating the definitions of the process during process improvement based upon analysis of non-conformance reports, whereas instrumental inference is updating which actions will be included in the process during process improvement. So the first is epistemic, knowledge-based: it's about updating the knowledge, based especially on the times when things don't go right. And the second is like policy inference, action planning as inference, because instrumentally it's actually about what the instrument is going to do. Okay, any thoughts on these terms? Okay, then the next set of terms in Table 1 was Markov blanket, internal state, and external state. So definitely check out number 14, number 26, and a few of the other live streams to learn more about the Markov blanket, and this is a figure from Da Costa et al. 2021. The blanket is here, the boundary state between the internal state of the social organization and the external state of the environment. When operating processes are defined in the quality management system, then what are those internal states? Those are the social or organizational processes developed during industrial engineering for minimum risk in the process environment. And then the external state is the environment. So that's kind of mapping the Markov blanket: kind of the car, the interfaces, and outside the car. Or maybe that's not the perfect mapping, but I think that's kind of what is
being suggested. And then the last few terms of Table 1 are four of the general constructs, general principles that are also section headers and important topics: risk, ambiguity, principle of least action, and principle of survival. Dean? Yeah, so I think all of this ties in to a way of looking at the world through rows and columns, which is fine. I mean, it's a good place to try to look at things and analyze them, but not everybody creates off of that, because not everybody is necessarily looking for a precision-forward view. Sometimes you're experimenting and you're making all kinds of mistakes and learning from that as well, and so again, you just want to see this for what it is. There's nothing wrong with it, but it's not the only way, that's all. Blue, anything? Okay, so then there's the figures, and these will recall some of the key terms we talked about earlier. The text says, as shown in Figure 1, SPC charts, these statistical process control charts, have upper and lower control limits, and processes need to remain within these limits. Processes that go outside the limits do not meet specifications that define intended states. So left side good, top of the Drake meme; right side bad, bottom of the Drake meme, not working. Okay, Figure 2 is this five whys. So it's written out here more textually, if you want to pause it and look at it, and here it's more graphical. So either of you, if you want to give a thought on the five whys, or, okay. Well, the scenario that's laid out is kind of working backwards from why there was a not-reliable organization, and the five whys is actually not related to Aristotle's four causes, but that would have been a nice opportunity, maybe there's a four-by-five matrix. But these five whys are referring to needing to go back kind of five layers, or like five derivatives up the chain, asking what led to a high-reliability organization being unreliable. So the general case of this sort of cascading
failure in terms of variational free energy. So the general phrasing is that there's high uncertainty, high entropy, in work instructions. So the IKEA instructions are not accessible enough to many people, or the radio is really blurry so people can't hear what letters are being said. And then that makes the variational free energy high because of prediction errors, which then leads to unproductive actions. So then you're scanning on the radio because you can't quite lock on, and that's consuming energy, and then that ends up reducing your battery so that you can't do what you actually need to do. And the case in this truck driver incident is that the truck driver is getting ambiguous logistical data. So this itself is a policy communication from, you know, some cloud mapping service or the company that's contracting the trucker, and this leads to the driver having increased spatial uncertainty, getting lost, which leads them to drive around unproductively, so they don't have enough fuel to efficiently complete the deliveries that are important. And so here's the graphical layout of the five whys, showing with that example how on the bottom the truck drivers are getting the bad info, and then that surfaces, after several repercussions, in terms of the structure of multi-level decision making. And in active inference, how can we use this framework, which maybe is used in industry, and map it onto the states of decision making from an active inference perspective? But notice that there's no reward here. It's not, well, drivers minimize their reward about this, or risk on this, or maximize their reward on this. So it just shows how this framework has a different framing than people might expect if the only machine learning model they were aware of was reinforcement learning, specifically reward-driven reinforcement learning. Dean? Yeah, and it isn't just industrial. You can be like me and jump in the car; my first two are both
emergency doctors, and you're heading off to the ski hill, and at some point the conversation comes around to: there are no accidents in the world. And of course, being emergency doctors, they see the outcomes of events happening to people, and so their perspective is that there's always a way of being able to go back and see all the parts of the decision branch and how it arrived in front of them, and then having to sort of put the pieces back together again. But again, I don't know that that necessarily means that randomness doesn't come into play, and that there isn't a certain part of this where the variation simply can't be calculated in. But I still go skiing with them; I don't just dislike them or de-friend them. So anyway, yeah. It just reminds me of when I dropped a hard drive, and then I thought, well, I guess this is the one in 100 hard drives that fails every year. Yes, but it was also the one time that I dropped it. So there's always this sort of n-equals-one way to uncouple cause and effect, or potentially actionable consequence, from the non-reliability of your organization. But then also there's this designing for the statistics, when they decide to make a recall or not based upon, you know, the relative amount of failure versus the relative amount of legal changes needed, something like that. So that's where you talked earlier about including co-occurrence instead of just only giving time over to the sequential aspect of this. Yep. So Figure 3 frames the quality management system, the QMS, the quality manual, as a generative model comprising beliefs about processes. So it's interesting because it could be seen as just a cookbook, recipe book, instruction book, but it's being deployed in a generative capacity. Instead of just saying, you know, whip the eggs for five minutes, or until they're frothy, or until they make peaks, it's like: we expect it to make peaks, whip it until your
expectation is achieved. It's a different way to get to that same outcome. Now, which one's going to be more resilient? If it says five minutes and there are no peaks, then what do you do? Do you add something, do you keep on going? Whereas if you have a precision-seeking process, maybe you can be resilient to it only taking two minutes and not overdo it. And the example of something being correct, like 99 out of 100 times, is just used repeatedly in the paper; it could be calibrated differently, and I'm sure it's situational, it's just sort of shorthand, 99 out of 100. But yeah, it is defining these QMS in terms of the active state. So what should be done? You know, pipette 50 microliters of fluid into the little tube, and then you monitor something about the process, and then that leads to this feedback cycle with the quality management review, that being in kind of co-evolution with the industrial engineering itself, and then that leads to the update of the process in the quality manual in step four. So then there's kind of like, okay, if it's too low after your first pipette, then do a little bit more pipetting, and then they keep on protecting that 99% functional outcome at each of those nodes that are bifurcating out. Okay, any thoughts? Not going to read this whole thing, but the top part talks about this design for assembly, and one of the principles is to make parts either symmetrical or clearly asymmetrical, so nothing in the uncanny valley. This is done so that it is immediately and unequivocally apparent how a part is to be put into a sub-assembly, and this is the famous IKEA catalog aesthetic, and how that made furniture building fun and accessible for some people, I guess. And then in this second quote, it is kind of thinking about that IKEA paradigm: although furniture companies may not be considering their industrial engineering in terms of minimizing variational free energy, it is apparent that its aim is to minimize errors between what it
intends people to see when looking at its furniture kits and what people do see when looking at its furniture kits. So it's sort of like, the goal should be to make it look like it looks on the sale room floor, or like it looks in your imagination after you see the advertising. And then if you can see those flat pieces of wood, and you can see that instruction booklet, and you can see the policy, then you're going to be not surprised, you're going to be satisfied, because your preference was to have the couch there. So just kind of a cool example that highlights how our preferences aren't just over temperature, we often talk about those kinds of examples, but we can also have preferences for having affordances or niche modifications. Like we can have a preference to have a handle on something, and then that kind of relates to this behavioral economics under active inference: what makes somebody expect there to be a new couch in their living room, and then what makes them drive out and pick it up on the weekend and everything. And then it's this kind of idea that the supply chain is this extended uncertainty reducer, it's bringing stuff to you. And then there's this notion, in the computational industrial case and in the at-home case, that the assembly information that's provided is a policy decision from the people who are making the product. And then that really relates to our participation in making that product: if it's all pre-compiled then there's no affordance for participation, versus having a culture of people working through it in many different ways. And then this was just the funny example of the interface, which said: although there may be little risk in furniture assembly, this can be followed by much ambiguity in furniture installation. So in other words, it's not just about what's on the brochure, because predetermining all the possible prediction errors in fixing the assembled furniture to walls in millions of different
buildings around the world is very challenging. And so there's this difference between the assembly information, how do you just assemble the system of interest, which is just the internal and blanket states of the new shelf, and then the operations: like, oh well, you didn't tell me that I couldn't kick it from the side; you just told me how to build it, you didn't say that I couldn't use it this way. And then, how do you design for internal coherence versus designing for interoperability with sort of unknown unknowns and all these different types of entities that the system's going to interact with on the blanket, on the interface? So it's not just the IKEA booklet, but I thought that was kind of an interesting notion, that even just building it wasn't really the end of the story, because then people take their perfectly built bookshelf and then just, like, staple it to the wall the wrong way, which ruins both. So, Dean. Yeah, and I don't know if you can see this, but then how do you account for this, how do you account for somebody who took a piece of olive wood, any random piece of olive wood, and turned it into something like this, right? So there's no Billy, there's no way of deconstructing this. You just have to pick up something that is a Markov blanket, it's pre-carved, and then you have to be able to pull out what you think is in that. And how do you engineer that? Well, it's a completely different process. So I think as people we're capable of doing both, and I'd love to talk to the author about how you do this kind of work when there are certain results that are available as part of the active inference model. So it'd be a nice contrast tomorrow, yeah. Yep, nice, good point: where's artisanship and craft in this phrasing? So here's an example using that case of the
bookshelf or the cupboard being bolted to a wall, and this also gets at that partition between the part of the system that you can design and the part of the system that is not being considered. So here the internal states, and this is kind of interesting, it'll be cool to see who is there live to talk about this sort of embodiment aspect: the internal state is the person's embodied prior knowledge of fixing assembled cupboards to walls. And depending on what they know about carpentry, if they're just totally watching the YouTube video and then letting it rip, versus if they're experienced with different materials, they're going to take actions, and that's going to modify the niche, which is the external state, the room to which the cupboard will be fixed, and that's going to be reflected in what the person perceives. And then it's just cool, here maybe there are multiple ways to interpret the center: there's the wall blocking, like an external blocking state; then there's the affordance block, where the tools are inadequate, like you need the hex wrench of size seven and you only have the size four, so you just can't do it even if you know how to do it; and there are times where even if you have the right tool, it's too rusty to do it. So it's just a cool example, and it shows a little bit of that horseshoe theory again between philosophy and engineering, where they're both radically focused on embodiment and pragmatics. Okay, then the next section after four is seven, Active Inference as a Unifying Framework, and basically they write: an important application of active inference theory can be as a unifying framework for a social organization framework, summarized in Figure 5, which we'll look at in a second, operated through quality management systems, QMS, via interoperability protocols at an upper level of multi-tier enterprise architecture. So before we look at Figure 5, it's just like, how does active inference
serve as a unifying framework in some of these areas that we've been talking about for the last months? Like, we've heard people use this integrating word to talk about active inference in action and perception, engineering, biology, communication, cyber-physical systems, remote teams, etc. So it's being used a little bit in a different way here. So let's see Figure 5, and then you can give any thoughts on it. Here, organizational active inference is again being mapped onto this sense, internal, action, external loop, this kind of circular loop. The internal states are the organization's beliefs; just like in Figure 4 it was the person's embodied beliefs, here it's the organization's beliefs, embodied in the QMS, which is again being framed as beliefs over processes and states. That results in human and artificial intelligence employed to fulfill specifications, procedures, etc., so these can be actions that are taken by APIs or bots or by people. This acts upon the external states, which leads to perceptions of sensory states, which could be the temperature in the room, it could be the pH of the wine, it could be the number of sales on the course. So then they do industrial engineering on this loop as an organization, in relationship to integrating perception and action through internal state updating, pretty much just like we've been talking about for all these other active inference systems. Any thoughts? Okay. Table 2 writes: there can be many reasons why pragmatics subsumes semantics and leads to prediction errors. For example, inept priors and inept preferred posteriors can come from lock-ins, path dependencies, and success traps, which place emphasis on past experiences over new information. Individuals may seek to survive by learning from filtered information that supports the preconceptions of groups which they believe can support their individual survival, rather than seek information that supports the efficient reduction of prediction errors in external states. So it's
almost like, you can't prefer what you prefer so much, otherwise you're going to, I don't know, maybe follow that local band all the way to a local end, instead of sometimes coming out of that local attractor. Anyone want to go for this? Blue? Yeah, it just seemed like confirmation bias, right? You seek to survive by learning the information that supports the preconceptions of groups, right? So that's like, I believe this, can my group believe this? I don't know, yeah. This is a cool table. It's like, okay, so in the perception phase of active inference you can have a top-down error based upon inept priors, so you can believe the wrong thing about what you should expect to see, or you can have the sensor be influenced, interestingly, by e.g. emotion, but that's, I guess, being phrased as a bottom-up way that perception can be modified. And then in the intermediate it's bounded rationality, so it's kind of like a compromise between what you think is possible and magical realism. Then the action phase also can be run aground by ineptly focusing on basically poor outcomes, like if you're aiming five degrees off when shooting a free throw then you're going to miss the free throws, and also your attention could be affected by poor signal-to-noise ratio, like if you couldn't see where the hoop was. It's just interesting how the sources of where the error can come from are being really methodically broken down. And then also in Table 3 there's a nice distinction, and it relates to that notion that active inference would be potentially very effective for repetitive processes but potentially not as effective for one-of-a-kind processes. And so the repetitive processes are like mass production, and the one-of-a-kind processes are like project production. So right off the bat it's kind of like, well, how individual does a project need to be before it's not best modeled
by active inference? And then it just talks about how Markov blankets, generative models, prediction error, and variational free energy play out in both of these two areas. So that would definitely be something for people to read through the table and then mention, and give a specific example or question to the author, because this is super informative and a cool combinatoric way to look at how ideas are related. So yeah, Dean and then Blue. So when I'm looking at this, and I don't know if this is helpful or if this just makes it even more turbid, but if one hand of this is the ability to frame something, so that we know what's in what we're looking at versus what we're not looking at, and if the other hand is how we match, essentially, if you go back to Table 2 and this Table 3 and the figure, the matching is what's superordinate and what's subordinate, what's inside of what. Those are the two hands. That still doesn't tell us what is going on in between, which I think is the big piece, which from a thinking standpoint is: am I fitting now hand in glove, meaning is my glove size a size large, or is it glove in hand, does the glove I need serve the purpose, because I'm skiing, or because I'm forging, or I'm water skiing, right? So between framing and matching is this fit question. All three of them are different things; the fit is in between these other two things. And even if we make a great table, and even if we make sure that we got the right headings and right definitions underneath, it still doesn't talk about: is it glove in hand or hand in glove? And I think for people who want to actually use active inference, they want to be able to not get stuck conflating fit with frame or conflating fit with match. They want to put those two parameters, the frame and the match, on the outside, and then see what happens in the middle based on what those parameters are. I think that's the struggle for most people trying to gain
entrée into, well, how does this active inference actually work? It works differently all the time; that's the point of the Markov blanket, it's the potential to be able to invert at will. I think laying it out this way helps, it gets you like 90% of the way there, but it doesn't really tell you what you couldn't do with it, as opposed to what just happened. Very interesting, Dean. Blue? So this piggybacks right off of what Dean was saying. The Markov blanket, and even here in the first line of this table, you know, Markov blankets are hard when they're shifting, right? And this is something that we've talked about in other live streams, in other contexts: shifting, overlapping, my Markov blanket interacting with your Markov blanket. I think that the underlying mathematics to support those kinds of arguments is kind of missing still, or not missing, but still being worked out is probably more correct. But I do think that Markov blankets do change. I mean, even if I just think about my brain as, you know, a Markov blanket: cells die, cells are born, and I know that that's very controversial in the neuroscience field, but cells are actually born. So there's constantly this turnover of who belongs inside the Markov blanket, and any kind of organizational structure has that shift. The boundary of the Markov blanket has to retain some kind of flexibility, and that, you know, is seen here in this first line. And also, whose generative model is it? So this is another thing that I was wondering about. The quality management system, or the QMS, right, I've maybe got the abbreviation wrong, but that is the generative model. But who does it belong to? Does it belong to the process, does it belong to, and this is maybe my deficiency in my understanding of industrial engineering, because I'm just
not familiar with the jargon, but is it the process against the quality management system, or is it the organization that has the quality management, or is it some kind of subunit, is it a team? So I'm just kind of unsure: they talk about this being the generative model, but whose model is it? And depending on how that's answered, and we'll have the author on so we can ask them, but depending on how that's answered, you know, these one-of-a-kind processes, as I said earlier, can maybe be used to update the generative model in a very explorative kind of way. Nice. So here are just a few implications and bigger questions. Here's a quote: Markov blankets are not features of real-world systems; rather, they are intuitive post-hoc descriptions made in order to model real-world systems. Hence any description of a Markov blanket is subjective and can be changed over time as the modeler's focus changes and/or as real-world systems change in the opinion of the modeler. Thus, as summarized by phrases such as "the map is not the territory," Markov blankets, and so on. And then also, and this is a funny line, it has been argued in relation to Markov blankets and other important constructs in active inference theory that the math is not the territory. Wherever have you heard that one before? But this is kind of the ultimate instrumental take; it's making a clean distinction between map and territory, between the utility-driven use of active inference and claims about what the system really is. And then this is an interesting quote, but I'll just read the last line: hence the description of Markov blankets could be a recurring challenge throughout project production, which may only be resolved by reference to decisions made in courts of law long after the project is completed. Just maybe think about, I don't know, maybe like the Bay
Bridge being built in the Bay Area in California, and, you know, years later it happened differently, and I'm sure some companies were bought and sold, and all the morphing of the organization and the physical modification of the project and the personnel changing over, all of that. On one hand it's an n-equals-one, unique, enacted situation, but then on the other hand there are principles for engineering it well, and sometimes the process is itself wondering about how it's organized, and sometimes you can't even know until after it's done. So that's just kind of cool and funny. Another quote, and this is sort of taking a take on philosophy from the engineering perspective: exactly how beliefs are updated in embodied cognition is a topic of ongoing research which involves specialist mathematics to describe relationships between prior probabilities and posterior probabilities. By contrast, updating beliefs through active inference in industrial engineering and quality management involves applying well-established practical methods. In particular, planning what should happen by active inference involves developing processes through applying industrial engineering techniques. So it's sort of like: we know what our policy affordances and our mechanism of updating our affordances are; that's our discipline. And so the broader question is, what are the implications for generalized frameworks like embodied cognition or active inference when they interface with disciplinary or project-specific, like one-time, work? And then, how does active inference get simplified in the context of industrial engineering, and how does work in the disciplines become linked in a bi-directional conversation to the broader body of knowledge? Just general questions, any thoughts? Okay, Dean. Yeah, go for it Dean. You're muted. Yeah, I can't hear you, Dean. Oh yeah, go, sorry, yeah, continue. Oh my god. Um, whether we're talking about the Bay Bridge, as in do we have
the proper glove in hand, or whether we're talking about instrumentalism, as in do I actually have a precise fit of glove for my hand: I think what we have to be aware of is that both of those questions come up post hoc. I tried on the glove and it worked, or maybe the design of the Bay Bridge should have been different given the contextual circumstances. All I'm curious about is, before the Bay Bridge is built, before I fit one of two questions into that glove-hand relationship, am I aware of how that can be seen post hoc or ad hoc? Am I conscious of where I'm taking this active inference model, and, as Blue pointed out earlier, whose is it? And, just as importantly, when is it? When is that question being asked in the context of what the product is? Because if we're going to talk about social organization as the product, active inference allows us to look on both sides of that. So as long as we're including the when in terms of discovering fit, I think we're actually using active inference in terms of what its potential is. That's a nice little bit of philosophy; it's not that we need it, but I'm just throwing it in there. Yeah, it's cool. So then here's another quote, on the explore-exploit dynamic: "Project production involves much more exploration than exploitation. Hence, more wayfinding than navigation is involved. In particular, wayfinding involves the ability to create novel routes, which are based on understanding a wider frame of reference than navigating along a preset route. Wayfinding involves creating novel routes through changing situations by making non-conscious reference to subjective prior knowledge and conscious subjective reference to current situations and semantic information." That's pretty dense, but pretty fascinating, because it does point at this contrast: when something is repetitive, you can just write out the directions and follow them in a bit more of a rule-following way, and
then the extreme case is something like a train: the Caltrain just went from San Francisco to San Jose; it had the affordance to speed up or slow down, but that's about it. Then, when there's bushwhacking or wayfinding, there's novelty of path, and there are higher novelties, like introducing a new tool or climbing to the top of a tree to see further. But there's this pathfinding novelty that is really important for all kinds of these applications. So it raises questions that will be cool to discuss: how does active inference juxtapose or interface with sense-making and wayfinding, explore-exploit, the pragmatic-epistemic distinction, and the way that models are fit with accuracy minus complexity? How does that all come together in active inference? Any thoughts? Yep, Dean. Yeah, because wayfinding is my niche, I was just so happy that he pulled in these two references. If you haven't read references 95 and 96 here, I strongly recommend them, because he's pulled out two fantastic references in terms of what that exploration piece actually is. They're both fantastic, especially 95. I read both of them, and they're both excellent examples in the context of what he's trying to talk about in terms of implications. I just wanted to give a clap for that one, because I'm glad he brought the wayfinding piece into this. Cool. Yeah, reference 95 is "Finding the Way: A Critical Discussion of Anthropological Theories of Human Spatial Orientation, with Reference to Reindeer Herders of North-Eastern Europe and Western Siberia" (2009), and the second article is "Humans Use Predictive Gaze Strategies to Target Waypoints for Steering" (2019). So it's kind of interesting, because, especially having adapted a visual foraging model to a stigmergic pheromone deposition model in the active inference paper, it has drawn out this contrast between two
extremes of foraging, both of which can have pragmatic and epistemic components. There's skin-in-the-game, body-on-the-ground foraging, where you're physically moving, it hurts, and you have to go somewhere to find out how it is there, versus visual foraging or visual scanning, where you do have to cost the movement, of course, but it's not as if where your eye lands directly pays a cost, in most cases. So it's kind of cool to see that wayfinding and sense-making come into play at these two extremes, with the niche-modifying body on the ground as well as with a scanning approach. Okay, another quote. We won't read the whole thing, but basically, after hearing all of this, we can imagine that prediction error is something you want to minimize for effective business operation: "A large prediction error could lead to the company dissipating, for example through its internal resources being scattered into an external state by being taken into the possession of its unpaid suppliers." So that's Markov blanket dissolution via court-mediated bankruptcy proceedings. "Alternatively, the company could be acquired and merged into one of its large unpaid suppliers that seeks to move up the value chain from tier-one to OEM manufacturer. In the case of acquisition and merger, the description of Markov blankets would need to be changed in legal documents." That just totally set off my Scott David alarm. Maybe think about the ways that Markov blankets can be designed, about managing synthetic externalities, about seeing law as niche modification and as a technology in and of itself, but also one that can facilitate other technologies coming into existence, or hinder them, as we see all the time. And he uses the acronym BOLTS: business, operational, legal, technical, social. The bigger questions are: how can we see active inference serve in a de-risking or recovery capacity, and how can we reframe existing or emerging practices for
resilience in terms of active inference? So this is like a 1980s-style merger and acquisition framed in the Markov blanket framework, but what does the DeFi active inference model look like? What does the smart-contract active inference model look like? Those will be really interesting questions, along with how they're going to intersect with formal law as well as the coder's law. Any comments? Okay. And then I think the last point was in the further directions of the paper: "Further refinement of active inference constructs is needed to enable wide application to social organization employing human and artificial intelligence. Apropos, the practical examples in the paper can provide starting points for research into the challenges of ascribing Markov blankets, defining generative models, handling pragmatics, and modeling variational free energy in real-world social organization." That's an idea we touched on in our 2020 paper, "Active Inference and Behavior Engineering for Teams": in the case of remote teams, the entire structure can be observed through the sensory data that's sent to a user and the actions that are received, for example on either side of an interface or either side of a blockchain wallet. Because remote teams are defined digitally, we have access to the entire state space, so to speak, of sensory and active states, whereas building an active inference model of people in a physical boardroom is always going to be complicated, even if you have a really effective way to get body position out of the cameras in the room, for example. So it's kind of cool to think about industrial design and engineering, and all these different ideas we talked about, in online as well as physical settings; it's not just cars and other hardware, it's software as well. So we'll close by letting people give their last thoughts. Blue and Dean: what would a good understanding enable? What are some
good unique predictions, implications, or areas that this line of research might address? What are the next steps and goals for this kind of research? And what are you curious about asking the author tomorrow and in the following week? Yeah, I think I've laid out some questions today that I hope I don't forget for tomorrow, but I definitely think that active inference in systems, organizations, and social structures is important, along with the validation of your model through other models, through the models of other people, and through this confirmation bias: oh, my model must be good because you have a model that's like my model. So I think this can help to really set boundaries and definitions on how we form social structures, not just businesses. In business there's kind of a hierarchical component, but also in society as a whole we form these social structures, and how does this social morphogenesis really take place? So I think the author puts out a good way of delving into social morphogenesis, even through quality management and industrial engineering, but these are only first steps. Dean, any final comments? I'm really looking forward to talking to the author, because at the end of the day, rather than exploring why these mappings have been put together in this paper, I'm curious about trying to figure out when these parallels formed in his mind and got to a place where he could take a lot of this background information, because he's really cleverly and thoroughly sourced here. When did he put these two things together as a reflection of one another, and where does he see that going forward?
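As a reference for the group discussions, the quantities that kept coming up here (prediction error, free energy as "accuracy minus complexity," and the pragmatic-epistemic split of the explore-exploit dynamic) have standard textbook forms in the active inference literature. This is a sketch in conventional notation, not notation taken from Fox's paper:

```latex
% Variational free energy: complexity minus accuracy.
% q(s) is the approximate posterior over states, p(s) the prior,
% p(o \mid s) the likelihood of observations given states.
F \;=\; \underbrace{D_{\mathrm{KL}}\!\left[\, q(s) \,\|\, p(s) \,\right]}_{\text{complexity}}
\;-\; \underbrace{\mathbb{E}_{q(s)}\!\left[\ln p(o \mid s)\right]}_{\text{accuracy}}

% Expected free energy of a policy \pi: minimizing G trades off
% pragmatic (preference-seeking) and epistemic (information-seeking) value.
G(\pi) \;=\;
-\,\underbrace{\mathbb{E}_{q(o \mid \pi)}\!\left[\ln p(o)\right]}_{\text{pragmatic value}}
\;-\; \underbrace{\mathbb{E}_{q(o \mid \pi)}\!\left[ D_{\mathrm{KL}}\!\left[\, q(s \mid o, \pi) \,\|\, q(s \mid \pi) \,\right] \right]}_{\text{epistemic value}}
```

The epistemic term is what rewards wayfinding-style exploration (expected information gain about hidden states), while the pragmatic term rewards navigation toward preferred outcomes, which is one way to formalize the explore-exploit contrast discussed above.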
Totally agreed. It's a single-author paper that reflects a really deep understanding and probably a lot of experience presenting and thinking about this, and maybe even applying it, so those are all super interesting areas to find out about. All right, well, Blue, Dean, thanks, that was epic. Thanks for all the help on the slides and on the stream; that was great. We'll talk to everybody in the dot one and dot two, or later. Bye, thanks all.