 Hello and welcome. This is Active Inference GuestStream number 43.1. It's May 11th, 2023, and we're here with Tom Froese. Today we're going to be talking about the paper, Irruption Theory: A Novel Conceptualization of the Enactive Account of Motivated Activity. We're going to walk and talk through the paper, and there will be a lot of time for questions. Please feel free to add any in the live chat. Thank you, Tom, for joining, and over to you for an introduction and overview. Okay, thank you. So yeah, hello everybody. My name is Tom Froese. I'm the head of the Embodied Cognitive Science Unit at the Okinawa Institute of Science and Technology Graduate University in Japan. And the scientific mission of our unit overall is to take the theoretical advances in cognitive science that are sometimes brought together under 4E cognition, so that's embodied, embedded, extended, and enactive approaches to cognition, which to our minds are on the right track in terms of how we should think about the mind and how it relates to the world, but which so far haven't gotten a lot of exposure in experimental work. And so our unit is basically trying to become an interface where we can do exciting work on the theory side of things and then try those ideas out in experimental work. We have work around human-computer interaction. We have EEG brain imaging. We do sensory substitution research. There's all kinds of stuff about behavioral coordination in social settings, and haptics. And so it's a very exciting place to be, because we're kind of sticking our necks out a little bit and asking: do all these cool ideas that we love and cherish actually make a difference in how science is done in the lab? And that's why we're here. But at the same time, what that kind of ambition makes clear is that some of the theoretical work also needs to adapt and change and advance, such that it more directly connects with the kinds of things that we can do experimentally.
And so the current paper that we're going to discuss is one of those outcomes, where I'm trying to think a little bit more clearly about how we can operationalize some of the ideas that are coming out of the enactive approach, such that we can start thinking about how to formalize them and how to even measure them using something like analysis of EEG signals. Awesome. Very cool, theory and practice. And there's a lot of prose papers and a lot of philosophy, and so it's just very exciting, even with the technologies you mentioned, how this really comes into play and how it does make a difference. So what's the story with this paper? How did it come to be, and where does it fit into that broader work of the group? Yeah, sure. So let's zoom out just a little bit before I go to the specific paper. So for most of my career, I've been working on, let's say, origins: origins of life, origins of mind, origins of social cognition, origins of social complexity. And I think that kind of look at origins is shared quite widely among my colleagues. So what we want to understand is what are the conditions, the necessary and sufficient conditions, for there to be a living being, for there to be agency, for there to be sense-making or participatory sense-making. And so for most of my career, I've been developing ideas about what those conditions are. And in the background, there was always this assumption that that's pretty much all the work that we need to do in order to have a coherent notion of, let's say, agency. In the sense that for notions such as active versus passive, or, going back to Alva Noë's famous book, Action in Perception, which was, you know, the super hot book when I was a PhD student and was discussed so widely, we never really questioned what the action part of this is. So how do we define something as being active versus passive?
Yeah, there are some ideas about regulation and normativity: things can go wrong and right when there's action in play and intentions in play. But to really think about how those intentions and values could make a downstream difference to, let's say, the neurophysiological constitution of the body is something that we just left to the side and didn't even really consider as part of the problems that we had to solve. So another stream of my research is agent-based simulation models, where we try to look at the minimal dynamical conditions that lead to a certain kind of behavior. So let's say I can ask: what is the minimal setup that I need in order to study social contingency, where one agent is sensitive to the dependence of the other's movement on its own movement? And I can do that and analyze that, and I can evolve it, and I can study the state space of it. And then we thought, kind of like, our job is done, you know, we've understood the dynamics of it; now let's move on to the next problem. But something started bugging me about that when I tried to integrate that kind of work more with my interest in phenomenology. And so this goes back to Francisco Varela's work in neurophenomenology, where he proposed that dynamical systems theory could be the kind of formal bridge between, on the one hand, a kind of Husserlian-style analysis of phenomenology and, on the other hand, empirical work, primarily in neuroscience but also including the body in interaction with the world. And so his idea was that because dynamical systems theory doesn't make any ontological commitments about what it is that's changing in time (all it is doing is describing how something is changing in time), it can equally be applied to mental processes, which change in time, and physical processes, which also change in time.
And then the challenge is just to find a kind of general form or structure which integrates both of these processes changing in time, and voila, we've solved the mind-body problem. You know, I'm simplifying, but that's kind of like the general ambition. So I'm not a neuroscientist by training, so I did the best thing I could, which is work more at the level of modeling the behavior of systems. But again, the ambition was: if I can create a dynamical systems model of something like social contingency, then on the one hand I can connect it with something happening in phenomenology, and on the other hand I can make a hypothesis about what would be necessary or sufficient in terms of experimental work. The problem is that that kind of work after a while made it very clear that the dynamical systems approach didn't leave any room for anything else to make a difference other than the equations that I was putting into the model. Which is to say that, you know, as long as I knew the initial conditions of my simulation, and as long as I knew the equations that determine how states change over time, all the rest is already predetermined. I can exactly replicate it again and again by just resetting the simulation. So it unfolds over time in a predetermined way. Sometimes we cannot predict it in advance, so it's not necessarily predictable without running it, but it is repeatable in an identical way. So there's nothing happening in each time step, so to speak, that could ever be surprising once you've seen the whole thing unfold. And so that really started bugging me, because why I got interested in the enactive approach originally is because it was all about action and perception. It was all about agency. It was all about intentions and sense-making, that meaning makes a difference to how we behave in the world.
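The determinism being described here can be made concrete with a toy model: once the equations and initial conditions are fixed, rerunning the simulation replays the exact same trajectory step for step, so no attribution of "value" can make any difference to the dynamics. A minimal sketch in Python (the particular update rule, a damped driven oscillator, is an invented stand-in for any agent-based model's equations of motion):

```python
import math

def simulate(x0, steps, dt=0.01):
    """Integrate a fixed, deterministic update rule from a given initial state.
    The dynamics here are illustrative only, not any specific agent model."""
    x, v = x0
    trajectory = []
    for t in range(steps):
        a = -x - 0.1 * v + math.sin(0.01 * t)  # fully determined by state and time
        v += a * dt
        x += v * dt
        trajectory.append(x)
    return trajectory

# Resetting to the same initial conditions replays the run identically:
run1 = simulate((1.0, 0.0), 1000)
run2 = simulate((1.0, 0.0), 1000)
assert run1 == run2  # nothing in any time step can surprise; no room for value
```

The trajectory may be hard to predict without running it, but it is perfectly repeatable, which is exactly the worry raised in the discussion.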
You know, what happened to the values? Like, you know, we were always talking about how the enactive approach has an account of how values can be intrinsic to the organism because of its precarious self-production, and this goes in the direction of autopoiesis and so on. But here I was confronted with the fact that if we describe all of that in dynamical systems theory, there's actually no room for values to make a difference. Everything is already predetermined by the dynamics. And so unpacking that dissatisfaction a little bit eventually resulted in a complete turnaround, where I now reject the adequacy of a purely dynamical systems approach to describing phenomena that have to do with agency and sense-making and meaning and value, which in this paper are basically grouped under motivations. There's something else making a difference to how the agent is behaving over time. So that's the kind of background story, and I have to say that it took me maybe three years to get myself to this point, because it's a big break in my own thinking and what I've written in the past, and at this point, I'm not sure how many of my colleagues are actually on board with this change. So it's kind of exciting and nerve-wracking at the same time to have a mid-career change that suddenly throws out a lot of what has gone before. But this is why we do science. Sometimes you have to change your mind. And so what we can do today is that I will talk you a little bit through this paper and you can probe it. And who knows, eventually it might change my mind back, and I'd say, actually, this is not correct. But right now, I think this is the way that we should move forward, and there are a lot of rich implications to further unpack. And so personally, I'm super excited that I had this change of mind about how to think about the relationship between mind and body.
And I think it's going to help, to some extent, enactive cognitive science, or embodied cognition more generally, to live up to its own expectation that it has something serious to say about how things like meaning and value make a difference to behavior. Is that right for the context? Should I jump into the paper? It's quite a saga, and thank you for sharing; that really helps a lot. So into the paper we go. Okay. All right. So maybe a clarification. So, motivated activity. By the way, can you see my cursor? Yes, we do. Okay. All right. So motivated activity is the term I'm going to use to capture all of these other aspects I was talking about. So meaning, value, intentions; it could also be higher-level concepts like beliefs, desires. So anything that has a kind of agent-level association and which is not, to use Stuart Kauffman's terms, just a mere happening. So something is doing something. And that kind of brings with it normative criteria of either succeeding or failing, or being good or bad at whatever it is that needs to be done. So it's a kind of qualitative difference to just mere activity. So because I need a term that captures all of that vast diversity of possible normative behavior, here I'm just going to say the activity is motivated by something else. And that something else will be one of these agent-level concepts. Nice. So it's supposed to be a very broad theory. It just makes me think of how an enzymatic reaction could be catalyzed, but it wouldn't necessarily be motivated, whereas motivation for an agent could be a catalyst. Yeah. Yeah. So, and here I don't go into this, but in the background there is the idea that one of the next steps will be to generalize this into a notion of biological regulation. And so there's a lot of life science happening here at my university. And if you talk to them, you know, they basically treat their systems as mere happenings, right?
It was just a purely physical system without any motivated activity. But I'm not sure that that's the right way of thinking about it. So when the cell does something for a reason, like up- or down-regulating things, possibly this account also applies there, because then notions of success and failure, and how much and how little is the right amount, and so on, start making a difference, of life and death sometimes. And so we should expect something similar to be happening even at that low-scale level. So although I'm going to unpack it now in terms of the human level, because that's the most intuitive level, I think that the general principles in the background apply to all kinds of regulation, even unconscious biological regulation. Alright, and I'm sure we're going to go into it, but what is the etymology of irruption? Right, I mean, we can already get into this now, given that we're already quite into the kind of contextualizing part. So some of the influences that have been very impactful for me go back to cybernetics, and particularly the work of Ross Ashby. And he has this notion of ultrastability, which is supposed to be a way of explaining how a system can become stable after being perturbed out of that stability, in a spontaneous, self-organized way, without actually having any kind of top-down explicit knowledge of what it is that's going wrong or what it should do to fix it. And so, again, it's supposed to be a generalizable account of biological regulation. And, you know, most work back in the day was taking the opposite extreme of saying, hey, I want to have perfect information about my target, I want to have perfect information about my error, and then I want perfect information about how I should reduce that error in order to hit my target. Right, and that's kind of like control theory, feedback-based control theory. And we can engineer systems like that, but nature is messy.
And a lot of the time, you know, whatever hits a system, you don't know what it is that's hitting it; you just know that something is not quite right. And so Ashby basically proposed that the system just needs to be able to break. And this gets close to irruption. That break basically means a change in its structure or a change in its organization. And that basically means that as long as it's not dead after that break, it has a chance to have hit upon a configuration that actually is the right kind of response to whatever is perturbing it. It's a new kind of organization. So it could be a stable one. And if it's not stable, then it will break again, until it eventually hits upon a configuration in which it doesn't break anymore. And it has basically found a new pocket of stability. Right, so it's kind of like an example of stochastic search. But you know, the interesting part is that for Ashby, this break comes from the outside. Like, you know, he has this example of a kitten being burned by the fire. It touches the fire, it gets burned, and it has to learn not to touch the fire anymore. Right, so if there are no breaks happening, according to Ashby, the system wouldn't do anything. It would just be static to some extent. It would just maintain its identity and sit there until something perturbs it in the right way, and then it would react. And so for me, that was always kind of a very strange way of looking at life. I mean, that view maybe applies to something like a computer: if you don't touch it, it just doesn't do anything, and then if you do something to it, it will respond. But living systems are somehow intrinsically active. Right, so they're always doing stuff. They're always changing, adapting, developing, evolving, learning. Right. And so the idea that all of those changes are only responses to the environment, rather than something that is intrinsically driven, doesn't make much sense to me.
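The ultrastability idea described above is essentially a stochastic search: whenever an essential variable leaves its viable range, the system randomly reconfigures itself (it "breaks") until it happens upon a configuration that keeps the variable viable. A minimal illustrative sketch, with invented dynamics and thresholds that are not from Ashby's own formalism:

```python
import random

random.seed(0)  # deterministic for reproducibility

def ultrastable_search(disturbance, viable=(-1.0, 1.0), max_breaks=1000):
    """Randomly change an internal parameter ('break') until the essential
    variable stays within its viable range under a fixed disturbance."""
    gain = random.uniform(-2, 2)  # current internal configuration
    breaks = 0
    while breaks < max_breaks:
        essential = disturbance + gain * disturbance  # toy 'essential variable'
        if viable[0] <= essential <= viable[1]:
            return gain, breaks  # found a new pocket of stability
        gain = random.uniform(-2, 2)  # break: random change of organization
        breaks += 1
    return None, breaks

gain, n_breaks = ultrastable_search(disturbance=3.0)
# The search itself, not any explicit knowledge of the error, does the work:
# the system keeps breaking until it lands in a viable configuration.
```

Note that in this sketch, as in Ashby's account, the trigger for breaking comes from outside (the disturbance); the paper's move is to ask what happens when breaking is also intrinsically generated.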
There must be also something on the inside of organisms that makes them restless in a way. And so one of the early attempts at getting at that was to think of a notion of self-breaking. What if systems actually continuously break themselves a little bit, which then has the consequence that the organization also slowly changes over time? And so then they are no longer dependent on their environment for adaptation, but it basically becomes an intrinsic source of adaptation. And so this notion of irruption goes back to Latin, meaning a kind of sudden breaking into or bursting in. So it has this meaning of breaking, actually, a kind of internal break in the system. Alright, so into the paper. So like I said, the main motivation is getting a grip on what we mean by action. And that's one of the concepts that goes back to the 90s, right? I mean, if you think about The Embodied Mind, the classic text by Varela et al., the way they define the perspective of an enactive approach is basically by linking action and perception into a loop. But what is action? Can a robot perform an action? Is it enough that there's just mere movement? And this is a long debate also with ecological psychology. Is it necessary that this movement originates in the organism? Or could it also be something that happens from the side of the environment, as long as it's a relational change? And so on. And so basically, I argue that we don't have a really good grip on what we mean by action, when an agent makes a difference. And so I review the enactive approach. I'm not sure how much I should go into this; maybe just to say briefly: this is what I meant in the beginning, where most of the work of the enactive approach has focused on the emergence of living beings with an intrinsic perspective of concern, with sense-making capacities and so on, rather than the downstream consequences of when that is actually in place.
But just briefly, the idea is that when we attribute purpose to an artificial system, we can do that: the function of a car is to drive, and if it doesn't drive, it's broken. So there's a normativity there. We have a criterion for evaluating whether it's doing the right or the wrong function. Now, an organism seems to be of a similar kind. We can also look at an organism and say, you know, is it behaving appropriately or not, given its circumstances in its environment? And if it's not, well, that's pathology, that's disease, that's sickness. And this is actually, you know, at the core of medicine, to some extent: if we didn't have any criteria for distinguishing when things go right and when they go wrong, well, how could we ever try to heal someone? Right? So intuitively, we're working with these concepts already. So for the enactive approach, the idea is that basically the main difference between a car and an organism is that the car is produced from the outside, whereas the organism produces itself from the inside. So there's a network of processes of material production, which basically create and synthesize all the molecules that compose our bodies. And those molecules of our bodies are exactly the ones that permit the reactions that build the body to proceed. So there's a kind of self-referentiality in the way in which we are organized. And because of this self-reference, the functions that the body is performing are actually created by the body itself. That means they're intrinsic to the body. And so the idea is that this is the way in which we can move from a purely external attribution of value and purpose, classic teleology to some extent, to an intrinsically based one, because of the self-production of the organism. And there's one additional point here. So this would be a way of saying the goals are intrinsic. But then there's also the question: why would anything matter to the organism in the first place?
And this is another important difference between life and, let's say, a machine like a car. If the car doesn't work, it doesn't care that it doesn't work. It's just as happy, if I can say that, not working as working. It makes no difference to the car. Very different for organisms. There will be processes becoming active when things don't go right, in order to try to compensate somehow. And so the idea is that because the living organism's being, its very existence, depends on these active processes, that's why things matter to it. It's because the organism itself is its own creation, so to speak, and not something that is just created from the outside. And so this is why precariousness as a notion is so important for the enactive approach. Given that it's an active process that's taking place under precarious conditions, which means that if the processes stop or don't proceed in the right way, the entire being of the organism would just cease, that means the being depends on the actions of the organism. And hence the outcomes of those actions matter to the organism. That's why it has an intrinsic perspective of concern on its environment. And that's where meaning comes from. That's why the world shows up as mattering to us and to organisms. All right, so that was a kind of mini-summary of, I don't know, two decades of enactive work on sense-making. So did that make sense to you? Otherwise, I can go a little bit further here, but I think this work has been much discussed already. So maybe we... Yeah, we can return, but please continue. Okay, all right. So here's a couple of diagrams. So what does this mean in terms of thinking about how value could make a difference?
So we've got this paper from, I think, back in 2010 by Di Paolo et al., where, once you start thinking about value as something intrinsic to the organism and as tied to its process of self-production, you start to see that how other people use the notion of value in cognitive science can be quite different. And so the diagram on the left, where we have value-guided learning, shows the way in which many architectures in cognitive science think of value, which is basically as a kind of evaluation system. So there's a kind of subcomponent somewhere in your cognitive architecture, and the task of that subcomponent is to make some sort of judgment about its situation and the world, and then pass the outcome of that judgment to, in this case, the behavior-generating system. And so the critique that the authors make of this is to say, well, you know, you can call this a value system, but you could call it anything you want, to some extent. You know, you could call it, I don't know, a decision system, you could call it a comparator system, or whatever. But there's nothing intrinsically different about the functions that it performs compared to some other function in this architecture. So there's nothing qualitatively different about the kind of behavior the subcomponent realizes. And it's very localized. So there's this worry that maybe what's going on here is an unjustified reification of value into something concrete and specific inside this component. And so their alternative proposal, on the right, is that value is something that emerges out of these self-producing, self-regulating processes of the organism and somehow shapes those processes in the background. And so I think this has been kind of the standard way of thinking about value for maybe the last 10 years or so in the enactive approach.
And it's something that, you know, I was also in that tradition, and one way of thinking about what I'm doing now is to really ask the question: well, even if value were an emergent property rather than a local, concrete property, there's still the question of how does that value actually make a difference to the processes? You know, if we removed this word value from our diagram, would it make any difference? And so, as I said before about my simulation models, I don't see any difference, right? So I can just run my agent-based simulation models, and then I can say, oh, this emergent property here, I want to call that, you know, the value of going towards the goal or something like that. But the system doesn't care about that attribution. I mean, you know, I could also not call it that, or I could call it the value of being happy, and the behavior of the system would be exactly the same. It just makes no difference to the dynamics of the system. And so what are the options? You know, how do we integrate this notion of value into our scientific account of behavior? Just one note on this, on figure one. The left side is very compatible, maybe even isomorphic, with reinforcement learning computational architectures, where you have some behavioral selection module, like you mentioned, and then there's a secondary evaluation that's annotating or layering over the process of behavior being generated. But at least there value plays a specific semantic role. Whereas, as shown here, if value is everywhere, then it's nowhere. You know, if it's in the foreground, it's in the background, and you have a closed loop, and you have no need to explain anything further. So you might as well just add all kinds of labels, you know, like the 'draft manuscript, not for circulation' watermark, and say this is how organisms work; you get all kinds of things, but that's not going to break the closure of the self-sustaining dynamical processes in the world.
And that closure has been highlighted by a lot of the work you've described recently, as well as by others who are discussed in the manuscript. Yeah, so that's right. And, you know, when we ask ourselves why the enactive approach to value is not more successful, even though it's somehow compelling in its biological grounding, I would say it's precisely because it doesn't give a better account of the efficacy of value. Right. So, yeah, maybe the traditional accounts have a set of problems, such as, you know, where do these values come from in the first place, and, you know, if they can just be captured by functions, do we somehow lose the qualitative aspect of them, and so on. But at least they do work, right? I mean, at least they have a function in the architecture that's very specific and formalizable and can be implemented in, for example, these reinforcement learning paradigms. Whereas the enactive notion is kind of diffuse and almost verges on a kind of nihilism, if you really want to look at it that way. So, it's like, if the value is identical with the global dynamical properties of the organism, well, that identity relationship doesn't allow it to have any function in its own right. You know, you could just describe the whole system purely in terms of its dynamics. And so it's kind of a slippery road. What they want to do is shift the notion of value to the level of the agent. So, as a whole, the agent, you know, has an intrinsic perspective on the world. That means that value shouldn't be for some subcomponent inside the agent. That's the kind of motivation for why you don't want to go with the classical view. But then the new view, where you bring in this agent-level perspective with an intrinsic concern for its own self-preservation and about the world: what's the scientific account of that? Well, you've removed value now from the subpersonal story and put it at the personal level.
But you haven't given us an account of how the personal and the subpersonal actually connect with each other. And so basically, you've actually just voided the subpersonal level of any kind of relevance in terms of connecting with something on the agent level. So yeah, I think we're on the same page. So, yeah, one way of thinking about irruption theory is to say we need to do more here. That is, if we want to make a place for value at the personal level, such that we can say, you know, I'm talking with you because I want to, right? That's an agent-level characterization of what's going on. Then we still have to tell a story about how that agent-level wanting something actually connects with all the material basis of our behavior, such that it realizes that intention. And that's something that hasn't been done. You know, these are two different levels of description. And yeah, there are no representations and no values at the level of the empirical story, at the material basis, and everything happens at the level of phenomenology. But yeah, there's a disconnect between these two stories so far. Yes. And it's not the same thing as the hard problem or the question of awareness. It's about causal relevance. And the image that came to mind is like a buoy in the ocean, and we explain its movement by appealing to the flow of the waves. And then making a model in which a sophisticated buoy, also going with the flow, is able to navigate and surf or something. You've explained away surfing by a really delicately balanced set of channels, such that the flow is one and the same as surfing in that context. But that gives no causal relevance to wanting to surf. And so one is either left with this just-a-happening view of wanting, or to just say, no, it never mattered. Yeah.
So your example is actually quite close to the kind of original Maturana example of structural coupling, where he treats people, or organisms more generally, as a boat drifting on the sea that's being pushed by the waves and the winds. And so the direction of the boat is not directly determined by the wind hitting it. You know, imagine a sailing boat, right? So the sails are up, the wind's pushing it, but instead of going in the direction of the wind, what it does is go sideways, right? And so that's his example of what he calls structural determinism, which means there's a certain kind of autonomy of the system with respect to what impinges on it from the environment. But you know, that's one thing. The other thing is to say that there's absolutely nothing happening there at the level of intentions of wanting to go that way, or that there is a function of wanting to sail in the first place, and things like that. There's no room for that in that story. You know, we are all, you know, just drifting, so to speak. And that's this concept of natural drift that they put forward. So yeah, this is the kind of conundrum that is faced by this kind of biology-of-cognition and enactive approach, but also by embodied cognition more generally. So the question is how we can do something different there. Let me see, maybe just to unpack a little bit more of the enactive approach with this diagram. Because we're bringing in biology of cognition just now, and autopoiesis, maybe it's good to just say a few words about how the enactive approach differs a little bit from those previous traditions. And so the classical view of autopoiesis by Maturana and Varela is about self-production. So that means there's a system that creates itself and its own identity, where identity is the defining organization of that system, and that's coupled with its environment. And that's all, and that's it.
And so as long as you don't die, you're just going to persist, coupling and self-producing. And if you die, well, that's it. And so just because of that, there will be a negative selection: if you're not particularly well coupled with your environment, you're not adapted, you will not persist. And so that's the whole story of biology of cognition. There's no level of value or intentions or agency or experience. These are all kind of narrative redescriptions that don't have any kind of causal relevance for understanding the real behavior of these systems. And so enter the enactive approach, and they say, no, we need to bring in intrinsic goals and perspectives of concern. And this has to do with the finitude of the organism and the precariousness and so on. And then after a long time, it started feeling like we're starting to create a shopping list of properties again: like, you know, it has to be self-producing, it has to be adaptive, it has to be precarious, and so on, right? And that's exactly the kind of thing that autopoiesis didn't want to do. It wanted to give a kind of first-principles operational definition of life, rather than the classic shopping-list answer to what is life, where you basically list your favorite properties of living systems and then say, you know, if a system more or less satisfies most of these properties, I'm going to call it living. Right. So it started going in that direction again. And then there was this nice work by Di Paolo and colleagues in recent years, where they tied autopoiesis a little bit more closely to the thermodynamics of it. And that means you think about what autopoiesis needs to do in a material context. On the one hand, of course, it needs to maintain its kind of structural integrity. So it needs to be closed off somehow from the environment, in order to be able to regulate its interaction with the environment.
But it can't be completely closed off, because if that were the case, it would just starve itself to death. It needs energy to do the work. But that energy comes from the environment. So actually, it needs to be open to energy flows in order to be able to do the work it needs to do to maintain its integrity. So there are suddenly two conflicting needs here. On the one hand, it needs to close itself off as much as possible; that would make it as autonomous as possible. And yet at the same time, to maximize the energy it has to do its work, it would actually have to be as open as possible to these flows, which would undermine its integrity. So life is faced with this weird primordial tension of having to satisfy two needs that can never be satisfied fully, and never at the same time. And the key move is to say that this is the reason why life is precarious. Intrinsically, it can never be in a stable situation. The only thing it can do is regulate how open or how closed it is over time. And this regulation then becomes the basis for agency. That's why life is different from other kinds of far-from-equilibrium systems. So this reminds me, by analogy, of attention-based learning, where just dilating and paying more attention, or constricting and paying less attention, can be part of an adaptive strategy for dealing with streams of incoming noisy or partial information. And maybe one reason why attention feels so tied up with agency, and so precarious as well, is because it's walking that impossible tightrope. And it knows it can't be at either extreme. As you pointed out, they both result pretty rapidly in death. So we're only going to come to see organisms that do have a dynamical navigation of this tension. Yeah, yeah. So actually, this is a general principle. I think this is the origin of it, or maybe the most basic form of it. But I think it's true of everything to some extent.
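This primordial tension between openness and closure can be caricatured in a few lines of simulation. The following is only a minimal sketch with invented parameters, not a model from the paper: openness `g` brings in energy but erodes integrity, so the fully closed and fully open policies both die, while a policy that alternates between opening and closing persists.

```python
def simulate(policy, steps=200):
    """Toy organism: openness g in [0, 1] brings in energy
    but erodes structural integrity. Returns survival time."""
    energy, integrity = 1.0, 1.0
    for t in range(steps):
        g = policy(t)
        energy += 0.10 * g - 0.05                        # intake vs. metabolic cost
        integrity += 0.04 * min(energy, 1.0) - 0.06 * g  # repair vs. erosion
        energy = max(0.0, min(energy, 2.0))
        integrity = max(0.0, min(integrity, 1.0))
        if energy == 0.0 or integrity == 0.0:
            return t                                     # death
    return steps                                         # survived the whole run

t_closed = simulate(lambda t: 0.0)                          # fully closed: starves
t_open = simulate(lambda t: 1.0)                            # fully open: disintegrates
t_cycling = simulate(lambda t: 1.0 if t % 2 == 0 else 0.0)  # regulated alternation
```

With these made-up rates, only the alternating policy survives the full run, which is the point of the tension: the viable strategy is not a fixed degree of openness but its regulation over time.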
Every time there is a way in which we're coupling with the environment, there will always be a tension between doing things your way and, at the same time, having to be open to the environment. Another area that's interesting in this regard, because we're doing EEG hyperscanning in our lab, is that most of the time when people think about hyperscanning, they think just about the coupling. The question is: how much interpersonal neural synchrony, for example, is there in a given condition? But what they don't think about is that what's probably happening is that sometimes we are more strongly coupled, and sometimes we are more distant, disengaged. Coupled, disengaged, coupled, disengaged. If we were always super coupled, that would be kind of pathological. It would be very hard to maintain, it would feel very uncomfortable, and you wouldn't get anything done. So a sensitivity to this kind of tension actually has methodological implications down the line, because we now have to ask questions about the diachronic changes in the strength of coupling of the organism with its environment. If we ignore that and just take the average, let's say, we might completely miss the phenomenon. One other analogy there is with so-called futile cycles in biochemistry, where you'll have the forward and the reverse reaction happening at the same time. People sometimes characterize that as wasteful. Yet when you pull back, it's actually the coexistence of these two pathways that allows adaptivity. And so the paradigm shift is not to see it as wasteful, but as adaptive. It's simply valuable for the cell to participate in that bidirectional activity. Right. Yeah. And just another example: sleep. In our culture, sleep is not really valued. Energy drinks are a huge industry.
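The methodological point about averaging away diachronic coupling can be made concrete. This is a hedged sketch on synthetic data, not a real hyperscanning pipeline: two signals that alternate between coupled and independent blocks show the alternation clearly in windowed correlations, while the grand average reports only "moderate coupling".

```python
import math
import random

def windowed_corr(x, y, win):
    """Pearson correlation in consecutive non-overlapping windows."""
    out = []
    for s in range(0, len(x) - win + 1, win):
        xs, ys = x[s:s + win], y[s:s + win]
        mx, my = sum(xs) / win, sum(ys) / win
        cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        sx = math.sqrt(sum((a - mx) ** 2 for a in xs))
        sy = math.sqrt(sum((b - my) ** 2 for b in ys))
        out.append(cov / (sx * sy) if sx and sy else 0.0)
    return out

random.seed(0)
# Two 'participants': strongly coupled in even blocks, independent in odd blocks.
x, y = [], []
for block in range(6):
    for t in range(100):
        a = math.sin(0.3 * (block * 100 + t)) + random.gauss(0, 0.1)
        x.append(a)
        y.append(a + random.gauss(0, 0.1) if block % 2 == 0
                 else random.gauss(0, 1.0))

per_window = windowed_corr(x, y, 100)           # alternates high, low, high, ...
grand_mean = sum(per_window) / len(per_window)  # the average hides the alternation
```

The per-window trace recovers the engaged/disengaged rhythm; the grand mean alone would misreport it as constant moderate synchrony.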
It feels like if we could engineer sleep out of people, then people would go for it, and you could make a lot of money with that. But given that sleep is a really vulnerable state for the organism, and we find it across all animals to some extent, there must be really important reasons why it wasn't selected against. Even though sleeping animals are exposed to real danger, because they're unaware of predators and so on, we still sleep for a large chunk of each day. Why? And so it's this thing: sometimes you need to decouple. And if you don't decouple, you're just going to dissolve, basically. Think about sleep deprivation. If you do too much of it, you actually physically die of sleep deprivation. So it's not just downtime, so to speak; it's absolutely essential for proper functioning. And so, bringing it back to the paper, one way of thinking about what's going on here is that it's not just that we are opening and closing. You could ask: if it's not for a metabolic reason, what could be the reason for this? We've got a companion paper for this one that's going to come out in BioSystems, hopefully soon, where we have a little model that shows what happens if you push a system far from equilibrium, making it an open system, basically, and then close it again, let it fall into its equilibrium state, and then repeat that over time. And at the same time, you add a little bit of historicity in the background, which is a kind of sedimentation of the states it has visited in the past, such that it's more likely to return to states it has visited before.
What happens is that the system starts building up an associative memory of the stable forms it has visited in the past, and can then actually start generalizing to forms of stability that it has never encountered before, forms that, without this kind of setup, it would in principle never find, because they're just too hard to reach in the state space. So we're basically redoing here the agency diagram that I just explained, and making explicit the historical dimension of this process: if you're switching between far-from-equilibrium and equilibrium modes, and there's a little bit of historicity, that's really when agency emerges, because the system then becomes able to jump to new configurations that would otherwise have been practically impossible for it to reach. So that's another story; you don't need irruptions to think about this, but it fits into the same larger rethinking of the possible role of breaks. And that's the bigger picture again: we should allow much more room for irrationality, indeterminism, noise, for things not working the way they should, and say that that's a necessary complement to things going well. That could be one of the take-home messages of our conversation. Yeah. And that reminds me of a lot of empirical work on heart rate variability and gait variability. The naive viewpoint would be: well, you want those steps to be really even in length, you want the heartbeat to be like a metronome, you want the breathing to be like a metronome. But in fact, it's metronome-like regularity that is associated with all-cause mortality, and higher-variability patterns that are associated with more vital health. That's right. Yeah. And there are good reasons for that. Right.
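The open-close-plus-historicity model just described is reminiscent of Hopfield-style associative memory, so here is a toy sketch in that spirit, not the actual BioSystems model. Noisy "settled states" are sedimented into Hebbian weights over repeated episodes; afterwards, relaxing from a random far-from-equilibrium state lands on the clean underlying form, even though that exact form was never visited.

```python
import random

random.seed(1)
N = 32
prototype = [random.choice([-1, 1]) for _ in range(N)]

# Sedimentation: each episode ends in a noisy variant of the prototype,
# and each visit leaves a Hebbian trace in the weights.
W = [[0.0] * N for _ in range(N)]
for _ in range(20):
    state = [s if random.random() > 0.2 else -s for s in prototype]
    for i in range(N):
        for j in range(N):
            if i != j:
                W[i][j] += state[i] * state[j]

def relax(state, sweeps=10):
    """Fall back toward equilibrium: asynchronous sign updates."""
    state = state[:]
    for _ in range(sweeps):
        for i in range(N):
            h = sum(W[i][j] * state[j] for j in range(N))
            state[i] = 1 if h >= 0 else -1
    return state

start = [random.choice([-1, 1]) for _ in range(N)]  # far-from-equilibrium kick
settled = relax(start)
overlap = abs(sum(a * b for a, b in zip(settled, prototype))) / N
# high overlap: the settled form generalizes beyond the noisy states actually visited
```

The system never stored the clean prototype, only noisy visits, yet relaxation recovers it: a minimal illustration of stability generalizing beyond the history that sedimented it.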
So there's a control theory aspect to it: we want the controller to have more degrees of freedom than that which is controlled. Heart rate variability is important because you want to keep your blood pressure constant, no matter what kind of situations you put yourself into. So basically, the heart has to take up all the slack, so to speak, of the variable conditions of me getting up from the chair and so on, while the blood pressure stays constant. That's why you have a lot of variability there. But there's a more general way of thinking about why variability is associated with health. And maybe irruption theory comes in a little bit there too, as we will see, because it basically gives arguments for saying that signs of agency making a difference might look like increased variability in the system. So to get there, let me narrow down what irruption theory is trying to achieve. In the next section of the paper, I propose that the challenge lying ahead of us is developing a motivation-involving account of motivated activity. The reason why I put it like this is that I think the large majority of work on motivated activity doesn't do that. And again, remember, I'm using motivated activity as an umbrella term for agent-level concepts such as intentions, beliefs, desires, experiences, goals; pick your favorite, let's say intentions, or consciousness for that matter. You want to say that consciousness makes a difference for what you're doing, or that your intentions do. Well, what are the standard ways of accounting for that? In terms of consciousness, maybe we can go with Walter Freeman's approach, which was quite influential for the early work of Varela and Thompson on neurophenomenology.
And Freeman says: well, for consciousness we've got this order parameter in the brain, which organizes the firing activity of all the neurons. Given that that's a high-level, emergent property that has downstream effects, and we think that consciousness is that kind of agent-level property that has downstream effects, why don't we say that consciousness works on the brain through the order parameter? Okay, so let's say that's one possibility. Now let's think about intentions. We can think about intentions again in terms of dynamical systems theory; many people have worked on this. Varela has a book on it, Deacon worked on it, and there's a whole literature in philosophy of action that applies a dynamical approach to saying: what intentions do is place constraints on the degrees of freedom of the system, such that it realizes the action that needs to take place. So intentions are emergent constraints that have top-down effects on your degrees of freedom, somewhat similar to the order parameter idea. But my main issue with these kinds of accounts, and there are more of them, is that they offload or outsource the efficacy of the high-level property to something else. Because I could also just tell the same story about the order parameter without talking about consciousness at all. I can say: look, I'm going to be a complete reductivist, a physicalist, and I want to understand how the brain works. I could still tell the story of, hey, there are all these neurons interacting, there's an emergent process that places top-down constraints on how the neurons are firing. Nothing there commits me to saying anything about intentions or consciousness. So then what's the point? Why do we need to appeal to consciousness in the first place? You don't get any explanatory advantage. The neuroscientific story doesn't get any extra advantage.
And conversely, if you wanted to explain something about how consciousness works, you're also not saying anything, because it makes no difference to the other story. And so that's the kind of situation that we're in. So when I say I want a motivation-involving account, I want something where the motivation itself, as such, as a motivation, makes a difference to the system, and not just because I'm identifying it with something else, like an order parameter or a constraint. I think if we don't have this ambition, then we're basically missing the point. This is really the main challenge that I see facing us. And irruption theory is one attempt at solving this problem. In more recent work, I'm going to try to crystallize this out as the hard problem of efficacy: how do we make sure that the agent-level properties we care about are actually efficacious in the world, and not just because they supervene on, or are identical with, or whatever you want to say, properties that are not actually at the agent level? That's, to my mind, where this work is situated. And if you can't solve that, I think it's a huge failure. We all assume that our intentions matter, that our beliefs matter, that our desires matter, and that they make a difference for how we behave. But so far, all our best scientific accounts don't allow this. They cop out at the last moment and say: actually, all the real causal work is being done by other things. And so there's always this danger lingering in the background: that all of our mental life is epiphenomenal, that all of consciousness is epiphenomenal. That's always there, somewhere in the background. And this is where the debate has been stalling for decades. So I really just wanted to say: let's bite the bullet and try something different. And maybe it won't work.
But if you don't even try, if you don't even see that there's a problem here, that, to my mind, is unacceptable. We really need to try to do something different. Alright, so two short thoughts there. If we don't intend for intention to be a difference that makes a difference, all our intentions are on the table and they're all for naught. And the second thought is that causal efficacy in the map is not causal efficacy in the territory. So just saying that a parameter in an equation has a causal influence on another parameter in the equation isn't what one set out to do, unless one was doing something like forensic equation analysis. Right, right. Yeah. So I think people get lost in the sense that you can do a lot of scientific work in the old paradigm. You can happily study consciousness all you want and say, hey, I'm going to leave this problem to the other people, to the neuroscientists. And the neuroscientists are happily studying how top-down constraints affect the neural firing, and they leave the problem of consciousness and intentions to the psychologists. Everybody happily publishes a lot of papers and hopes that somewhere along the line these things will meet. But unfortunately, just saying that there is a kind of mapping is not the answer to this problem of efficacy. A very clear example that everybody is familiar with is the grandmother neuron, or the Jennifer Aniston neuron, or grid cells, right? We can make a correlational analysis and say that the firing of this neuron correlates with the person perceiving this image on the screen, let's say, or being in this part of the maze, or whatever. And that's great. It's very exciting that we can do that kind of work.
But then the question becomes: how is that content, that representational content or association, making a difference to how that neuron is firing? And what are we going to say there? We can look at that neuron all we want, we can study it down to the nano scale if you like, and there's never going to be any place where you can say: oh, now I understand how Jennifer Aniston makes a difference to this neuron. It's just not going to happen. You're going to have chemistry, electrical potentials, fluxes, membranes, all this messy stuff that the neurobiologists look at, but you're not going to have a point where you say: oh, I see the desire here, or I see the value here. And so to just say that these neurons represent a location in a maze has absolutely no efficacy at the level of describing how the neuron behaves. And again, as long as we don't even see this as a problem, I think we're lost, because we can publish all the papers we want saying, hey, this part of the brain correlates with this and so on. But the fact that it has mental content associated with it doesn't ultimately appear in the scientific account of how the system behaves. At some point, it disappears. At a higher level it's there: looking from outside the system, we can look at what's happening in the environment, we can ask the subject, we can look inside the brain, and we see all these links very nicely. Alright, now let's forget about these links, let's go deeper into the brain and look at all these cellular systems and so on. Oh, but what happened to all this other stuff? At some point, it has just magically vanished.
And it's a kind of disappearing trick, because we don't even ask the question: how did all this extra knowledge that we were positing at the higher level make a difference to what the neuron is doing here? All we've got is electrical potentials, let's say. Yeah. And so that's the current state of the art, where we don't even see that there's a problem here, that there's a gap here. Some colleagues of mine, Hutto and Myin, call this the hard problem of content: how is it even possible to say that there is content associated with this kind of neuron, content which could be true or false, correct or wrong, with different levels of accuracy and so on? All these normative conditions are completely missing from the picture of how the neuron works. And what I'm trying to say here is that there's an additional problem: even if we could give an account of how it is that these neurons also have mental properties, like bearing mental content, there's the further problem of saying how that content could make a difference to how those neurons behave. Now we're talking at the neural level, but it's the same kind of issue again: how do we move from something that's normative to something that makes a difference, empirically speaking, in the material basis? So yeah, deep, deep issues. And irruption theory is just a small stab in that direction. But I hope that what it can do, at least, is get people thinking that there's a really interesting problem here that we've been blind to and should pay attention to. So let's get into irruption theory then. It has a couple of axioms, and the idea is that these axioms are supposed to be general and intuitive enough that hopefully everybody can agree with them. If you don't agree with them, let me know, because they are not supposed to be controversial.
Everybody should just be saying: yeah, okay, I can sign up to this. So axiom number one, motivational efficacy: an agent's motivations, as such, make a difference to the material basis of the agent's behavior. So that's like: I want to make a point by lifting my finger. My finger just went up. Great. It's as trivial as that. I'm just saying that whatever intentions we have, we can see that they have material consequences in the world. This is just everyday experience. Our whole lifeworld is based on this axiom. If we didn't believe in this, the whole social order would somehow collapse. We just assume that there's a relationship between people's motivations and intentions and how they behave. So that's axiom one. Now axiom two brings out the problem here, which is that this is in tension with our best science so far. Axiom two, incomplete materiality: it is impossible to measure with scientific instruments how motivations, as such, make a difference to the material basis of behavior. And this is what we've been talking about: no matter how closely I study how the brain or the body works, put it under the microscope, take the scalpel, open up the head, look inside the brain, whatever, we never see motivations as such making a difference to how anything behaves. It's just material processes. And for most people, this is where the real problem is. This is the problem of free will, the mind-body problem, the problem of mental causation, the problem of the causal closure of nature and of how the mind fits into the natural order. All of that is packed into these two axioms, each of which, I think, we should accept. So I'm not saying we should reject them.
What I'm saying is that we should live with this tension and not try to immediately jump to a conclusion that would remove it. Both of these axioms are independently valid. The question is how we deal with the tension between them. And a first step in that direction is to introduce another axiom. Axiom number three, underdetermined materiality: an agent's behavior is underdetermined by its material basis. And this is crucial. This comes out of the discussion we had in the beginning: from a dynamical systems point of view, everything is predetermined by the laws of the dynamics and the initial conditions. No surprises. I could always rewind the tape and know exactly what's going to happen. And that, to some extent, is the concept of nature that a lot of enactive, but also cognitive science in general, works with. We're asking these hard questions about how to fit axioms one and two together while trying to fit them into a clockwork universe, a picture that was in fashion at the time of, let's say, Newton and Laplace, but is long gone in the rest of the natural sciences. So why are we forcing ourselves to solve these ultimate problems in the context of a physics that's centuries outdated? That doesn't make much sense to me. Now, this axiom might be questionable for some people who really think that science should be in the business of recovering a deterministic worldview. And for them, even quantum physics is not an argument against it, because they believe that some future development of quantum physics will show that everything is deterministic after all, a kind of Einstein's "God does not play dice" view. But why should we want that? It feels much more natural to say that nature is messy, that there are degrees of freedom that can't always be closed, and so on. So this might actually be the most controversial of the three.
But I think in terms of our best science, it's actually quite consistent with what we know about nature. There are aspects of nature in which we cannot, with absolute precision, predetermine what will happen next, even if we knew everything about the universe at a given moment in the past. Alright, so that's for all of nature. That doesn't say anything specific about us and how we interact with the world. But it does show that the only reason why axiom two looks like it's in tension with axiom one is if you assume that axiom two also gives you a complete account of nature. If it gave you a complete account of nature, there would be no room for axiom one, since axiom two says that we don't find direct measurable evidence of motivations making a difference. But at the same time, we know that axiom two doesn't give us the full picture of what's going on. And then the tension is slightly defused, because there's space for something to make a difference which is not accounted for by axiom two. So it's just opening the window of possibilities, logically speaking, for the moment. Alright, those are the three axioms. Now, let's see what irruption theory makes of this. The conceptual breakthrough that occurred to me was about getting serious about how something like an agent-level property could make a difference in the material basis described by axiom two, while not violating axiom two. That means my motivation-involving account can't just say: hey, there are motivations inside your brain. That would be going back to the diagram we had in the beginning, where there's a value system inside the system somewhere, making judgments.
So that would be placing the value inside the organism. But it can also not just be an emergent property like an order parameter, which was the right-hand side of that diagram, where value is this kind of emergent property. Neither of those works for me. Neither of them gives me a motivation-involving account. Basically, they substitute motivations for something else, whether a local or a global property of the system. So what would a motivation making a difference in the material basis actually look like? It can't appear as a motivation, because that's not possible. You can't put a question to nature by measuring it and then say: no, the answer is not three, it's a motivation. It just doesn't work like that. So how can it appear? Well, it must appear somehow as part of the material basis, but it's going to be a very strange kind of object, because it's not really an object at all: it's normative, it's qualitative. And so what I want to do is flip this on its head: what the motivation does is underdetermine the material constraints. Imagine that I'm recording from your brain, and I've got my time series, a signal, and all the material, organizational properties of your body and the environment are constraining how your electrical potential is changing over time. Now, if I wanted to say that something is happening in your signal here because of a motivation making a difference to how you behave, well, that motivation cannot be captured by the material, organizational constraints directly. It's something else making a difference. And something else making a difference, from the point of view of the material, organizational properties of the system, will look like noise. It will look like something breaks in the system, something unpredictable, something unintelligible.
No matter how much I study the system, no matter how much I describe it and understand it, something just happened in my measurement that I cannot fully account for, no matter what precision I have, no matter how much I know about the history of my recordings. And that's because it's the motivation itself that's making a difference, and it cannot be captured at that level of description. So the way I put it here, as a working hypothesis, is that the more an agent's embodied activity is motivated, the less that activity is determined by its material basis, because other things are shaping the activity, and they are not identifiable with the material basis: because they're motivations, because they're desires, because they're beliefs, because they're wants. Although all of those have a material basis, they are not identical with that material basis. They make a difference, but that difference cannot be reduced to a difference just at the level of the material basis. This might sound a little strange, but to some extent it fits quite well with certain trends in the enactive approach of linking its philosophy with Eastern styles of thinking. In Japan, for example, there is the school of Zen, which has a nice phrase, one that Varela was very fond of, about the relationship between the mind and the body: it's not one. You can't just collapse them. But it's also not two; they're not in complete independence. The right way of thinking about the relationship between the mind and the body is that it's not one and not two. And that's it. That's all they have to say. It's not one and it's not two. So it's a kind of strange, intermediate thing. And that means that there are cross-border effects, there are interaction effects: one can make a difference to the other, but without collapsing into the other.
And so, to recognize when this happens, when one side reaches into the other and makes a difference without becoming the other side: that difference will look unintelligible from within that side; it's something that side cannot grasp on its own terms, because it's coming from somewhere else, with a different kind of ontology. Now, this is all very abstract, but when we get concrete, it lends itself quite nicely to quantification, because we do have measures of surprise. We have measures of unpredictability. We have measures of noise in a system. And so the way I propose to measure irruptions, as an approximation, is just through entropy. What is happening is that your motivation is causing a certain kind of disorder in the system, disorder in the sense that the purely material, organizational constraints are bracketed for a moment, which is the same as saying that they have less of an ordering influence on the system. So it's a disorder, and we can measure that with entropy. Now, in the classical libertarian work on free will, once you get rid of determinism, there's always this danger: doesn't that mean that everything is now random? How would that be useful? But from the point of view of complex adaptive systems, that's not a worry anymore, because we've got lots of interesting work showing that breaks, going back to Ashby, are actually a source of adaptivity. They're what allows the system to be flexible and to adapt to its environment. That's exactly what is needed in order to be flexible. And far from being bad, as you said earlier, variability is actually a sign of health. So to some extent, it's not bad that the way our motivations make a difference to our bodies is by destabilization, by disordering, by bracketing constraints.
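Entropy as an operational proxy can be sketched with, for example, permutation entropy, one common choice in the EEG literature; this is a generic illustration, not the paper's specified measure. A rigidly determined signal scores low; the same signal with added unpredictability scores higher.

```python
import math
import random
from collections import Counter

def permutation_entropy(signal, order=3):
    """Normalized permutation entropy: near 0 for rigidly ordered
    dynamics, near 1 for maximally unpredictable ordinal structure."""
    counts = Counter()
    for i in range(len(signal) - order + 1):
        window = signal[i:i + order]
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] += 1
    total = sum(counts.values())
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return h / math.log2(math.factorial(order))

random.seed(0)
rigid = [math.sin(0.2 * t) for t in range(1000)]        # fully determined signal
perturbed = [math.sin(0.2 * t) + random.gauss(0, 0.5)   # same dynamics plus
             for t in range(1000)]                      # unaccounted-for variability

h_rigid = permutation_entropy(rigid)
h_perturbed = permutation_entropy(perturbed)  # higher entropy than the rigid signal
```

On this reading, the entropy difference quantifies how much of the signal's ordinal structure escapes the deterministic description, which is the kind of quantity the theory proposes to track.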
That's exactly the kind of degrees of freedom that our bodies need in order to be flexible and healthy. So it's a very nice fit with what we already know, actually. We know that these are markers of consciousness, of agency, of health. But now what we have is a larger framework that explains why it is the case that, let's say, neural entropy tracks levels of awareness. It's no longer just empirical work that says: well, we were measuring this, and we're correlating how much entropy there is with whether you're asleep, or comatose, or in a psychedelic state. Now we can say: given our a priori principles, the axioms that I outlined, we expect that when these things affect each other, given that the axioms cannot be violated, this will express itself as a certain kind of perturbation or disorder appearing in our measurements. And that's what we find. So now we have a really nice theoretical framework that can actually explain to us why variability and entropy are associated with agency and with health. So, yes, let me try a little restatement and see if I'm on the right track here. I'm imagining a person in a device that can move their arm. In the situation where this device entirely moves their arm for them, 100% of the variance in movement is explained by external observables. Now, they might have had no motivation to move it, or maybe they really did, but that motivation would be epiphenomenal, because it basically didn't make a difference; the work was all done by the external apparatus. So we can say 100% of the variance was explained. And in every situation other than that, less than 100% of the variance is explained. There's some R-squared that's between zero and one, and in all real living and breathing situations, it's a lot less than one.
So there's some space where the material basis, even if we believe that it were fully measured, wouldn't fully explain it. Exactly. The second piece that I see happening is that motivation, at least in our lives, at least in my to-do list, feels like an ordering principle. And there's a sense in which it is an ordering principle. But especially in connection with Ashby and breakdowns, motivation could be an adaptive disordering principle that actually supports the movement to adaptive regimes. So it locally disorganizes, but then at some mesoscale, or perhaps some macroscale, it provides an ordering. And as per that first apparatus thought experiment, to the extent that motivation is disordering, the space it has to play in is the variance left unexplained by the material side. That's exactly right. Yeah, you got it. So let me just unpack a little bit more; good that you prompt me. You're right: on this view, we are no longer directly in control of our behavior. And that's a little bit disconcerting at first to grasp. What we can do to our bodies is set the conditions. We can open degrees of freedom, we can open spaces of potentiality. But we have to trust our bodies and their environments to fill those spaces in the right way. So again, we open a space and say, voilà, right now there is the possibility for a new behavior to emerge, or for behavior to switch to something else. But what behavior is going to be selected? That's not directly under our control. That will depend on our history, our predispositions, our bodily capacities, our body memory. It will depend on the affordances of the environment, on what kind of interaction process we are engaged in. And so suddenly all of this work in embodied cognition fits in very neatly. All of that comes into play and says: actually, action self-organizes based on all of these criteria of historicity, embodiment, and situatedness.
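The apparatus thought experiment from a moment ago can be put in toy numerical form. Everything below (the variable names, the noise level) is an illustrative assumption, not data from the paper: a linear fit on the external drive explains some fraction R² of the movement, and 1 - R² is the leftover space.

```python
import numpy as np

# Toy version of the arm-moving apparatus: how much of the movement is
# explained by the external drive? The unexplained remainder (1 - R^2)
# is the room in which something else could be making a difference.
rng = np.random.default_rng(1)
apparatus = rng.normal(size=500)          # externally imposed drive
residual = 0.4 * rng.normal(size=500)     # variation the drive doesn't fix
movement = apparatus + residual

def r_squared(y, x):
    """Fraction of variance in y explained by a linear fit on x."""
    slope, intercept = np.polyfit(x, y, 1)
    pred = slope * x + intercept
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

r2 = r_squared(movement, apparatus)
assert 0.0 < r2 < 1.0   # neither fully passive (R^2 = 1) nor fully unexplained
```

The fully passive limit of the thought experiment corresponds to the residual term shrinking to zero, which drives R² to one.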
What we do as agents is open spaces for those processes to take place. And that's our agency. And then, of course, we kind of offload the rest to our bodies. And as long as everything goes well, our bodies are actually quite capable of responding in the right way. But that's something that's acquired; it's not pre-given, so to speak. We need to undergo the right history to acquire these dispositions and these tendencies, such that when we're in a situation and we open the space, the space is filled in the right way. And that's very interesting. It's a kind of mixed concept of agency: we're partially in control, but just the last bit eludes our grasp. We can't force it. But I feel actually that to some extent that's what experience feels like. I can try to make this point by speaking, and I kind of find the words most of the time, but I just have to trust my body that the right words come out. I cannot choose the words in advance. And the same is true of all of our actions. I can form an intention to reach for something, but sometimes I misreach. And for some people whose body is not quite optimally organized, reaching can actually be a big problem. So we really are dependent on the body to fill these spaces in the right way. And affordances are a big part of this too. So there's this general idea that we should choose our environments carefully, because they will shape the way we behave and think and so on. And it fits again perfectly with this story: we open the spaces, but then the environment will solicit certain kinds of propensities from us. And we're not always fully in control of exactly what it is that will come out of us.
And so it's a kind of compromise between a libertarian point of view, of full control over our behavior, and a deterministic point of view, of a complete lack of control over what's happening: we can decide to open a space of possibilities, but actually it's other forces that close the space. Awesome. Please continue. Many, many interesting angles. I mean, that part, I skipped it, but good that you brought it up: it's also described in the paper in terms of a thesis of attunement. So it goes with this. I have three theses here. One is the irruption thesis, which says that if there are cross-domain effects, we will see them as a disordering consequence in the domain that's being affected, because they're coming from outside of that domain. And then, if that's the case, we have this challenge in front of us: how do we explain that this disorder actually does some good work for the organism? And one thing there is that there are two possibilities for this disorder. One is that it could be a whole-system consequence. So, there's a lot of work looking at self-referentiality, and Varela himself in other places tried to link this with Gödel's work on the incompleteness theorems and so on. And so maybe irruptions are system-level consequences of self-reference and incompleteness at that level, but that's future work. Or it's something that happens very locally: at the smallest micro scale, there's a kind of glitch in the system that permits a kind of disordering of some particle configurations or something like that, basically using up some energy there. That's also possible. If that's the case, we would have to have a scaling-up effect. So, how do we go from a small disorder that is barely even measurable, or maybe not even measurable because it's an almost infinitesimally small perturbation, to something that makes a difference at the scale of behavior, right?
But again, we've got a whole nice body of work in artificial life to slot in there, which shows that organisms are organized so as to be sensitive to very small differences, right? So this is self-organized criticality. We can talk about chaos. We can talk about scale-free dynamics. So there's a whole body of work that already shows us that the body is organized so as to amplify very small perturbations. That's nice. Now, again, we have an interesting idea of why it is the case that the body is organized in that way, in this kind of poised way. And then, even if you do have irruptions scaled up to the macroscopic level, there will still be the sense that it's not really useful to have a huge disorder washing through your body, unless it happens in the context of being attuned already to the environment in the right way. In which case, actually, it will become a source of flexibility, such that you're no longer trapped in a kind of committed coping. Like, I have to keep hammering, right? I have to hammer. I have to hammer. My house is burning down, but I have to hammer. It would be very bad if we were really good at doing a single task but could never switch to another task. So how open are we to switching tasks? That openness to switching tasks can be regulated by the irruptions at the macroscopic level, but, like I said, exactly what action will happen next will depend on how we are attuned to the situation. And so this attunement then brings in this whole body of work in embodied cognition on how a lot of our action selection happens in the organism-environment interaction process. It makes me think about reservoir computing, and about the environment as a damper for the irruption. Because if we do grant that there is endogenous excitation, then it's not just outside-in, as in the computational view. So we are going to grant that there's some kind of endogenous generativity or spontaneity.
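The scaling-up point has a textbook caricature in sensitive dependence on initial conditions. This sketch is mine, not the paper's: a chaotic map (the logistic map at r = 4) amplifies a perturbation of one part in a trillion into a macroscopic difference within a few dozen iterations.

```python
def logistic_trajectory(x0, steps, r=4.0):
    """Iterate the logistic map x <- r * x * (1 - x), which is chaotic at r = 4."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2, 50)
b = logistic_trajectory(0.2 + 1e-12, 50)   # a near-immeasurable nudge
divergence = max(abs(x - y) for x, y in zip(a[-10:], b[-10:]))
# The tiny nudge has grown to a behavior-scale difference.
assert divergence > 0.1
```

A body poised near criticality is, on this reading, organized so that this kind of amplification is available when needed.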
Then, when it interfaces with the environment, there are basically two options from a dynamical systems perspective. Either the environment could dampen the excitation, or there could be a positive feedback, a spiraling out of control, at which point the coupling would escalate until it reached some other negative feedback point. So the environment plays a critical dampening role through attunement, so that that break can find a new non-equilibrium steady state or a new path. Right. So there's a kind of question here of what's the right balance: how many irruptions do you need in order to be in an optimal situation? As you said, there could be situations where it happens too much, so that you never have time to properly settle into a new attractor or into a new activity. And maybe we could think of some disorders, like schizophrenia, in this way. There's so much motivational involvement in what's going on all the time, hyperreflexivity, that you're basically destabilizing the system constantly and never allowing it to properly settle. And there are interesting links there also with the fact that body memory gets lost, and so on. So that's the 'too much' direction. And there's also the other direction: what if it doesn't happen enough? A certain kind of rigidity will set in. You will not be able to switch context easily. You will not be able to deal with novelty easily. And maybe something like depression could be in that direction: you feel like the world is closed off to you, there are no longer any possibilities for change or development. And that takes this whole work into another direction, which is: can we use it to better understand how to treat people? So for the schizophrenic patient, what can we do to help dampen the irruptions that they're constantly sending into their body?
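The damping-versus-escalation alternative just mentioned can be caricatured with a one-line linear feedback loop. The function and coefficients below are purely illustrative assumptions, not anything from the paper:

```python
def relax(kick, coupling, steps=60, setpoint=0.0):
    """Apply an endogenous kick, then iterate x <- setpoint + coupling * (x - setpoint)."""
    x = setpoint + kick
    for _ in range(steps):
        x = setpoint + coupling * (x - setpoint)
    return x

damped = relax(kick=1.0, coupling=0.8)     # |coupling| < 1: negative feedback absorbs it
escalated = relax(kick=1.0, coupling=1.1)  # |coupling| > 1: the same kick runs away
assert abs(damped) < 1e-3    # perturbation settles back toward the setpoint
assert escalated > 100.0     # perturbation escalates instead
```

The clinical 'too much / not enough' speculation maps loosely onto where a coupled system sits relative to that stability boundary.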
Whereas for the depressed patient, maybe it's the opposite: how can we make them more susceptible, more open to irruptions? And interestingly, it can be both ways, right? So maybe in the case of depression, it's something that's happening at the person level, the agent level, the level of experience and motivations, and the body itself might actually be fine to some extent. That's an interesting possibility here, right? And in that case, we have to work at the level of their experience: how do we get them more motivated, to some extent? And then that will kickstart things. Or, on the other hand, it could be a physiological thing: at the person level, everything is fine, except that the brain, maybe for example in Alzheimer's or dementia, is compromised, and it's no longer susceptible to these degrees of freedom being injected into it. And so what looks from the outside like two cases of becoming more rigid and less responsive and less flexible might have opposite sources, in terms of whether the disorder lies more on the cause side of the irruptions or on the consequence side of the irruptions. And we can think about this now, because previously these things were collapsed; we couldn't easily differentiate them. But now we have these two possibilities to look at. And a lot of the interventions that do work in the clinic, let's say for depression, feel like they're almost simulating irruptions, right? So why do psychedelics work so well for depression? Well, because they're a huge destabilization of the system. And it works both at the motivational level, where you have a huge increase in the degrees of freedom of your cognitive processes, of your experiential processes, which then also destabilizes the system at the brain level. Or another example: obsessive-compulsive disorder. Why does something like deep brain stimulation help?
Or electroconvulsive therapy in other cases, like for schizophrenia. Why is it that these seemingly crude interventions of just injecting a lot of arbitrary noise into the brain, noise that is irrelevant from the point of view of what the brain itself is doing, help? The brain is undergoing its processes, and suddenly, from the outside, we inject these bursts of activity that are not contextualized, that don't fit in with what the brain was already doing. And yet what happens is that it actually makes people feel better. It makes them feel refreshed. It makes them feel more stable. It makes them feel more flexible. Why? Why should the injection of these bursts of activity have these consequences? With irruptions, we can actually start to conceptualize this in a different way and think that what we're doing there is maybe amplifying or substituting irruptions artificially. And the body is there being responsive to them. And then, once things are more well-oiled again, the person themselves can take over the role of producing irruptions themselves, rather than having them induced from the outside. And it makes me think of the kind of unreasonable efficacy of general physiological interventions: sleep, walking, eating, cold showers, meditation, exercise. Why do all roads channel back to these large-scale breaks? Because at a deeper, nested level, they allow the adaptivity to be exercised. Yeah, especially the exercise one. There's a recent paper in PNAS about measuring neural entropy in a huge brain-imaging data set and then ranking which tasks have the highest level, and actually movement-based tasks have the highest level of neural entropy associated with them. And for people doing EEG, we kind of know this because it's a nuisance, right?
When you're doing your EEG, you have to keep people sitting still, because if they start moving, the signals go all over the place. And a lot of that is muscle artifacts. But in general, the idea is clear: the more you move, the more things are happening in your brain. And so, yeah, just going for a walk could actually really make a difference. Yes, many. I mean, this is the heart of the enactive quest and of active inference: if we only wanted to describe the special case where there was no action, or where actions were being caused by something that didn't matter, then it would collapse back onto passive inference. But the challenge of the scientific analysis of behavior is actually serious. And I think you're pointing to some kind of figure-and-ground conceptualization, such that with what had been thrown into the variance-unexplained category, the motivation was getting thrown out with the bathwater. That's right. Yeah, that's a very nice way of putting it. And maybe just a final thought here, and that has to do with the replication crisis, something to ponder. Why is it that in psychology it's so hard to get people to behave the same way in experiments? We try to counteract that by having larger sample sizes and more control conditions and so on. And if we can't replicate, we think, I guess it wasn't controlled properly, or we didn't have a large enough sample, and so on. But what if all that variability is actually a positive signal that someone took the task seriously and made a difference to what's going on? People are not machines. We can't directly control how we will respond to situations. What we can do is open degrees of freedom and then let the situation help us self-organize the action.
And so, of course, if you fully control the situation such that there are only two action choices or something like that, and everything else is also pretty much fixed, well, then you can get people to behave almost like reactive systems, and then things are replicable. But you might completely lose what's interesting about people in the first place. And so we might have to turn things around and say: how do we operationalize this variability in such a way that, rather than throwing it out or trying to control it away, we take it as the marker of the person actually making a difference here in the task? In physics, a similar revolution happened, right? There was a lot of annoyance before quantum physics came around: why are we getting this interference, and so on? And it took a while for people to accept that the way forward is to just accept this uncertainty, operationalize it, work with it, and get equations that work with it. And then suddenly we had the most precise science in our hands. But it involved this strange switch, where we suddenly had to realize that the uncertainty was not an artifact of us failing to observe things in the right way, but was a property of the phenomenon itself that we were trying to understand. And I think we need the same thing for people, and for agents more generally. And this is the classical anomaly in the Kuhnian paradigm change: it's not that normal science is unaware of the future paradigm; it's that it has an error-variance explanation for it. We made the partition between signal and noise as we know it today, and that's noise, which means we can describe it with the Gaussian, which means it has all these nice mathematical properties. And so it has this kind of empirical feel of a total statistical model, which can be convincing.
But so were the total statistical models of epicycles and of pre-quantum photoelectric phenomena. Yeah, you're right. And I think people were always frustrated by the large error bars and so on. We can deal with them, but we'd prefer not to have them. But I think the switch wasn't possible yet, because we didn't have a good grasp of why there should be this uncertainty. What explains it? Where does it come from? When should we expect it, and when not? Under what conditions, and how can we formalize it and operationalize it and measure it? So I hope Irruption Theory points a little bit in that direction, by saying: hey, for example, if you look at neural entropy in the brain, maybe it's actually partially picking up on the subjective involvement of the agent that's making a difference here in how the behavior unfolds or how the brain processes activity. In that case, we're no longer just looking at error bars; this becomes something that's itself of interest to us, something we want to study and understand. So hopefully it can facilitate the paradigm shift, because if you don't have an alternative, you're stuck with just the error bars, right? But I think we're seeing the light at the other end of the tunnel that maybe we can head towards from now on. One joke, and then I'll read a comment from the chat, and then you can give some closing thoughts. The joke: formerly known as error bars, now known as motivation bars. Okay, I'll read a comment from the chat, and then you can give any reflections, closing thoughts, or next steps. So IA Rowland Rodriguez wrote: I think if all behaviors described as agential are just products of a causal chain of interactions and exchanges, in and out, then there isn't this free leap from nowhere.
The modeling of organisms doesn't seem to be able to account for this gap in the mechanical unfolding of energy flow, where an a priori is not an empirical sedimentation and habituation. An a priori also means to impose. So one question is: is life hard to model because it imposes priors seemingly from nowhere as regards causal dynamics within the infinite encapsulation of blankets? I.e., like the opening question regarding the reduction of action to mechanical motion: is the imposition of vital priors hard to account for in mechanistic, emergentist modeling? Wow, there's a lot to unpack there. Let me think. So there was a mention of energy. This is something that I'm very keenly working on at the moment, and I think I just mentioned it in passing. If we think about irruptions as a disordering consequence, then, as a kind of thought experiment, let's say that we can look at a particle moving through space somewhere in your body as part of some sort of material flow. An irruption happens at that point and location. What would it look like from the point of view of measuring this particle? Well, suddenly it would become unpredictable. Instead of just moving forward, it would move in some arbitrary configuration, because it's no longer constrained, or only constrained, by the kind of forces that kept it straight in the beginning. And that's basically an increase in disorder from the point of view of the system, which means that whatever energy was contained in that particle is no longer available for doing work. And so one interesting implication, which is still totally crazy, and I'm almost scared to say it, is that mental work costs energy just like physical or mechanical work. Both of them cost energy, but not for the same reason. It's not that mental work costs energy because ultimately it's just another form of mechanical work. It is its own drain on energy, according to this idea.
Now, that might be a very small change. Maybe it's not even detectable, or it could be a big one. Who knows? The brain is the most energetically expensive tissue of the entire body, so who knows whether all the energy budgets add up the way we expect them to? I guess that's an interesting empirical hypothesis that could be looked at somewhere. And what was the other one, about the imposition and the priors and so on? So yeah, I've kind of left the origin story out of irruption theory a little bit. I mean, I gave the kind of enactive story of why we should think that organisms have an intrinsic perspective, and why living beings are different from other kinds of far-from-equilibrium systems. But I actually want irruption theory to be somewhat independent of that. That's my preferred way of getting to an agent; for me, that's a compelling story. But you can have any kind of story you want, and you could still plug in irruption theory at the point of saying: now that I've got an agent, how does that agent make a difference to the material basis that's giving rise to it? And then irruption theory can help you understand those links. So I guess one more way of thinking about this, on a more cosmological side, is that if irruptions are on the right track and are actually a genuine source of entropy in the organism that can't be reduced to other sources of entropy, well, that increases entropy production. So that means life, due to its active regulation, due to its agency, produces more entropy than non-life, in the sense of other far-from-equilibrium systems like a convection roll or something like that, which is a popular example in, let's say, ecological psychology to show that increased heat transfer actually has organizing tendencies. So if you have things rolling like this, heat can flow more easily from the pan to the air on top.
Okay, but if it's living and it's actively regulating those flows, it will produce even more entropy. And so if we think of the general tendency of the universe towards increased entropy production, then living systems become a kind of natural consequence of that unfolding, in the sense that they're adding another level of complexity on top of other far-from-equilibrium systems, another source of entropy production. And so that would mean that life is kind of expected, or spontaneous, rather than something that is rare and weird in the universe. Wow. The closing image for me is watching the water boil: it's already far from equilibrium, it's rolling, and life is jiggling it, just helping it boil and channeling that energy. Right, so that's a metaphor for what life is all about. What we're doing is channeling the flows. Well, thank you very much for this excellent guest stream. I hope that the work is well received on your journeys. And please always feel welcome to come back when the time is right. Okay, great. Thank you so much for having me. That was a lot of fun. Awesome. Thank you, Tom. Farewell. All right, thank you. Bye-bye. Bye.