Okay, hello and welcome everyone to ActInf Lab. This is the ActInf Lab livestream and I'm pretty sure we're live. Yes we are. It's number 22.1 on May 18th, 2021. And this is gonna be a really great discussion, so I'm looking forward to it, and thanks again everyone for joining live. Welcome to ActInf Lab. We are a participatory online lab that is communicating, learning, and practicing applied active inference. You can find us at the links that are here on this page. This is a recorded and archived livestream, so please provide us with feedback so that we can improve our work. All backgrounds and perspectives are welcome here, and we'll be following good livestream etiquette. Welcome, Scott. And raising our hands to ask questions and hear from everybody, because this is really gonna be a special discussion. We're here on paper number 22 today, May 18th, and next week on the 25th we're gonna be having the 22.2 discussion on the paper Psychophysical Identity and Free Energy by Alex Kiefer. So thanks a lot, Alex, for joining us live and engaging with the 22.0 video, because we gained a lot as a lab working through that, and so we're ready to take it to the next level through conversation with you. Thank you. And today we're gonna be learning and discussing the paper; the link is here. And this video is just us unpacking and exploring, raising questions. And we're gonna be taking notes so that in the 22.2 we can be following up. We're gonna start with some introductions, just to introduce ourselves and also maybe to raise any thoughts or questions that we had at the outset about the paper. So everyone can feel free to write down points or questions so they don't forget them. And we're pretty much just gonna jump around to any topic or question that drives us along, and we'll flow with it as people join and leave. And there will be a follow-up discussion, 22.2, next week. That brings us to the introductions and the warmups.
So I'll start and we'll pass it around, and then end up with Alex, who will provide, I think, a short introduction to the paper, and then we'll continue the discussion from there. So I'm Daniel, I'm a postdoc in California, and something I'm excited about today, something we were just speaking about right before, was learning about transparent encoding and understanding what that might mean. I'll pass it to Scott. Hello, I'm Scott David. I'm at the University of Washington Applied Physics Lab, where I run a program called the Information Risk and Synthetic Intelligence Research Initiative. And I'm delighted to be here today, and I'll pass it to Steven. Hello, I'm Steven, I'm based in Toronto. I'm doing a practice-based PhD at Canterbury Christ Church, and I'm really excited about the psychophysical areas being brought into this process with free energy. I'm intrigued because that connects to some of the research I do with communities in their landscape and how to understand that. So thank you, and I'm gonna pass it over to Alex. Thanks, hi everyone. I'm Alex Vyatkin, I'm a researcher at the Systems Management School in Moscow, Russia, and also a co-organizer for ActInf Lab, and I pass it to Dean. Hi, I'm Dean, I'm up here in Calgary, Canada. I'm retired and I just try to keep up with everybody here. I'll pass it to our special guest today, Alex. Is Dave... no, David has left, sorry, okay, continue, Alex Kiefer. Hey, thanks, yeah, I'm Alex Kiefer. I'm a researcher in philosophy and cognitive science, and I do some modeling as well. I have an affiliation with Monash University. Currently a lot of my time is also devoted to working for Nested Minds Solutions, which is an enterprise started by Maxwell Ramstead and some other people. Though it began just recently, we've already got a lot of work under our belt. So I'm really happy to be here. Thanks for having me. This is, I think, gonna be a really fun discussion.
Cool. Maybe, just while the other participants are preparing a thought or a question to lead off: Alex, how did you come to writing this single-author paper? What in your path led you to write it this way? Yeah, sure, so this is an idea that I think first came up for me in about 2015 or so. I've been fascinated by this formal analogy between optimization processes and thermodynamics in physics for a long time; well, not that long, maybe 2012, 2013, when I started getting into the neural net literature on Helmholtz machines and things like that. And I think this analogy between physics and optimization has been tempting for a lot of people to want to collapse, right? And this is something that people like my friend Mel Andrews constantly warn against: just taking the informational description of the system to be the same as its physical description. But I thought that there was a compelling argument to be made here. So I wanted to actually articulate it, not necessarily endorse it, right? And I think the way I couch things in the paper is a bit cautious on this, but I wanted to articulate the simplest, perhaps most radical perspective one could take on the relationship between the cognitive dynamics and the physical dynamics that you might be able to defend, right? From a sort of free energy principle-ish perspective. So I wanted to lay that one way of understanding this relationship out as clearly as possible. I think in the process I ended up writing a paper that many people find to be quite complicated. So this was supposed to be an exposition of a simple idea; maybe we can see if we can get back to that. Awesome. So anyone can raise their hand and we'll just start, but I really liked how you said articulate, but not endorse. You've put out an artifact for us to interact with and to explore, and you have framed the psychophysical identity thesis as a hypothesis.
So it's something that evidence can now be used to support or to disprove. And that's a nice piece of work that now, obviously, we have started a discussion around. So we'll go to Steven and then Scott and then anyone else with a raised hand. Yeah, one thing I like about what you're doing here is you're sort of bringing it into the domains where the action is happening. It's a bit like how most of the physics around thermodynamics is really often used in chemistry. A lot like Niels Bohr was actually a chemist; he wasn't a physicist per se, he used physics. So there's an interesting thing about how these areas get transposed over, and maybe layered again to biochemistry, which maybe is trying to look at the mechanisms that sit on that, and X, Y, Z. So I think that's quite relevant to some of the questions we've been wrestling with: what happens when things start to transition to the ontology? And how can you keep a handle on that when you've got people from different backgrounds? So I think that's quite interesting. Awesome. Thank you, Steven. Scott, and then anyone else with a raised hand. Yeah, I'd like to just ask, having written the paper and now contemplated it, discussed it with folks: what's your impression of the role of the observer of difference, of gaps, of differentials, in identifying sources of free energy, exploitable sources? Let me back up. I did a paper and presentation at MIT in 2013 called Entropy Accounting. And I asserted that Carnot's equation, the hot and cold differentials that are reflected there, are the same as arbitrage equations in markets. And I asserted that heat engines were produced by the human mind observing an attribute, because it didn't matter that we thought we had a flow of caloric and it was flowing downhill. The notion of flow was what we asserted in order to convert random heat, random motion of particles, which we really only used for warmth.
Being able to put that into a cylinder, a combustion chamber, and then perform work took thousands of years. I wonder if you could comment on this: it feels like we're using information like the cavemen used heat, just for warmth, and not yet putting it in a combustion chamber. And is the combustion, the processes where differentials are captured, exploited, utilized, I'm not sure what word to use here, like a reaction chamber? Are we reacting observations and data into inference in these reaction chambers of mind, similar to the differentials that we exploit to get a heat engine for work? Are we able to capture... what are the devices of capture? Sorry for the long question. Your hand on your chin makes me think that maybe it's an intriguing question. It's definitely intriguing. I probably can't do justice to it, but I'll try. So first, to take your question about information, whether we're really exploiting it to its maximum possibilities as information or just treating it as something like a source of warmth. It gets to the heart of my perspective and the perspective of this paper, I guess. So I said I floated this thesis as a possibility, but I do have some deep-seated conviction that information can always be looked at both in terms of what it represents and physically. So any processing of information is literally a process of physical change as well as a process of some system coming to know something. Right, so to take the etymology, to go back to Shannon's original framing, information is just the amount by which someone or something has been informed, in my view. So I think insofar as we're treating something as information, we are using it as information, right?
It's not... I don't know, maybe I'd love to hear more from you on this. I don't know what the analogy would be to using information just as heat. Is it like we're aware of something, but we're not really doing anything with it, in terms of inference? So I guess my view is that insofar as we're thinking and exploiting information sources for inference, we're using it, maybe not maximally efficiently always, but we're using it in a way that's intelligent. But just one more point on this, the analogy to the combustion engine: I guess my thesis here is that anytime we're performing an inference, or any system is performing an inference, the physical aspect of that is something like that, right? It is an actual physical reaction. So there is always something that has precisely that nature behind any use of information. But everything I just said sounds to me completely trivial as I'm saying it. That's its power. Yeah, Scott. What did you mean? We can unpack that, because it's known that heat is not just simply wasted; where there's a heat differential, there can be work. And potentially where there's an information differential, there can be an analogy to physical work, but in the informational regime. So what motivates that line of thinking for you? Yeah, well, again, I'll send around the slides from 2013, but the idea was that it is not by analogy. It really is a capturing by the mind of an attribute of a system, right? You look at a block of iron: it has spin, it has temperature, it has color. There are all sorts of dimensions that it has that are unexploited. And then we can look at them and exploit them, for magnetism with spin, and then temperature, or electron flow through iron. The same block of iron is just sitting there, but you measure different attributes, and as you measure different attributes, you measure them into existence, and that existence then can be exploited.
And so we did that with heat, right? They thought that heat was caloric; they actually thought it was a fluid, right? But then the guy who was the cannon borer, I forget which one, the guy who bored the brass cannons, he said, wait a second, it can't be a fluid because it's unlimited. It's a quality of the material. So I think what we're talking about in information is a quality not of the brain; it's a quality of the mind, and the differential between the brain and the mind. There's a mind out there to be learned. Let's call it an environment if it's animals, but if it's humans, it also involves language. And when you're born, you don't know it, and then you go learn it. So your brain has to get tuned into the mind. So to me, the mind is our combustion chamber, and you gotta learn the combustion. Now, different people learn different things: if I'm a hunter-gatherer, I'm gonna learn different things in my environment to combust, right? I'm gonna have different cues, and I'm gonna say, wow, this is relevant for me because I gotta do this. So I think we train our combustion chamber; it's symbolic, right? And so we're training this thing in which to contain the differential accomplishment of the active inference, the Bayesian accomplishment. And then we train. So, just the last thing, and I'll shut up for a minute: there was an article in Nature magazine a couple of years ago where they said less-than-12-month-old human infants learn in a Bayesian way. If you tell a baby a horse has four legs, the baby will point to a coffee table and say horsey. Then you say, no. Scott just disappeared. It was too much of a truth drop with the table and the legs. So, Steven, to you, and we'll return to Scott when we can. They froze, sorry. Okay, Scott's back. Anyway, continue. Continue from the table, the four legs.
Yeah, anyways, the Bayesian accumulation: you point to a table, then you say, no, it has to have hair; they'll point to a dog and say horsey; then you say, no, it has to have a mane; they'll point to a horse and say horsey. So it's Bayesian; the article was about Bayesian assembly of human knowledge. And so what if the baby is not really assembling the knowledge but just tuning their brain into the mind, and the mind's a combustion chamber? And we teach the mind, we have to learn the mind, because that's the useful combustion. We've already designed this community mind to navigate space, and we teach it to our offspring in order to make them more fit. Anyway. Cool, thank you, Scott. Any thoughts on that, Alex? Well, it's interesting. I don't know if I completely understand the way Scott's using the term mind here, but I can kind of see how something that deserves to be called the mind would emerge through this sort of learning process. So I guess there's a trivial sense, I think, in which anything that has energetic properties has a mental side. That's kind of what I'm saying in the paper. But it's trivial. And if you want something that really functions cognitively, then you need to have internalized a lot of these regularities through the sorts of learning in infants that we're familiar with here. So if that's what you mean by learning the mind... I think, Scott, you analogized it to the environment at some point, or maybe it's more than an analogy. It's like the source of the data, the sensory data that you're interpreting. So yeah, I think it's a cool idea. Cool. Steven, welcome, Blue. Steven, with a question, and then anyone else with a question, or Blue, you can say hello. Yeah, one question I'd be interested in is: do you see maybe more than one type of information processing that may be happening at different levels?
So maybe you've got, because we've had people talking, say, at the quantum level, which could be happening particularly, say, at the photons coming in at the iris and things going up, and then you've got other levels where you've got these steady states going at different rates. And then you've got some of the things you're talking about where maybe it goes into this sense of a more averaged, closed mind idea, which would have to be somehow created, maybe, because the dynamical nature of the data and the noise is not necessarily exactly how it is when it first arrives at the organism, but maybe you're bringing back some of those more averaged-out heat ideas, around how energy can be averaged over time in some sort of heat form. Would that be accurate? I'm not sure about the heat piece. I'm just genuinely not sure. We can talk more about it, but as far as multiple types of information processing, I think that's definitely what's happening in the brain, of course. There's all this literature on multi-scale active inference and such; my paper isn't directly about active inference, but I think there's no... so we tend to single out certain processes in the brain as processing information, but really that's going on at fine-grained levels and on larger scales, as you say. I think we tend to just pick out one piece to analyze at a time. I guess the part of the question I don't know how to answer is what are the types of information processing in question; I guess I'm skeptical of that type distinction, not against it, I'm just not sure what kinds of types we have in mind. But definitely I think there's a role for averaging, in that, even if we just stick to neurons and don't worry about underlying chemistry or physics, it can't just be that one neuron firing represents your grandmother, because then if that neuron is shot, you can't represent your grandmother.
So I think there has to be, if we want to associate content with brain states, for example, those states are going to have to be couched at the level of averages in some sense, or some kind of statistical property, and I think that comes up clearly also in the work that Karl Friston and Wanja Wiese and others have done on the whole Markovian monism idea; they talk about the fact that it's not individual brain states that encode information on their view, it's the mean of a distribution, a conditional distribution. Anyway, going a bit into the weeds on this, but I do think that's important. Heat, I don't know, let's talk more about it. Yeah. Heat, we had the what-is-temperature slide, which I'll just switch to. Yeah, wait, Steve, do you have any other thoughts? Well, yeah, no, go to that slide, and maybe something following from that is maybe something about the mind and the level of the psychophysical; maybe actually the mind is kind of a psychophysical level, as much as we think of it as a brain level, but maybe what you're pointing to is something a bit more of a dynamic, which is kind of interesting. Yeah, I mean, sorry, I generally have trouble seeing distinctions between levels. So maybe that's what you all could help point out, like why these things are not the same, and it would help me. Go ahead. Hence the psychophysical identity thesis paper that you wrote. Right. Because that's the entire topic of this discussion: what are those relationships, and are they like each other, or are they overlapping, are they isomorphic, are they one and the same? So here on this temperature slide, you wrote about how increases and decreases in temperature change the variance of distributions. So maybe just: what is temperature, or in this psychophysical identity mapping, what does temperature reflect? Because changing the temperature of the brain, you're not just talking about going into a warmer room. You're talking about something related to inference.
So what is happening there? Well, right. I mean, I guess I am saying, look, if you change the temperature in the room and that ends up changing the temperature of the brain, then it's kind of an implication of the thesis that you are changing the temperature of your statistical model as well to some degree, right? Probably usually to an imperceptible or insignificant degree with something like walking into a different room, because the brain has its own internal homeostasis, and it's not like we're going to drastically affect the brain's internal temperature by changing the room temperature one degree. But I really am speculating, or I'm considering the consequences of... I mean, the overall theme here, and temperature is a special case, is that any of the terms in the Helmholtz free energy from physics, from thermodynamics, should directly, and this is the transparent encoding idea, should directly be reflected in the cognitive dynamics. And so, cognitively speaking, what temperature means, like it says here, is just temperature parameters in models, neural network models and other sorts of models. I think I've heard this referred to in the active inference literature as the shaky hand parameter, right? So it's like you're reaching for something: if you increase the temperature parameter, there's less certainty in exactly where the motor system will end up, right? So that's what it means cognitively speaking, or if you don't want to call low-level motor control cognitive, then in terms of these models of organisms, what it means is just literally the variability of the distribution, right? The variance. So I guess what I'm saying is, yes, I think if this thesis is true, then what encodes that is the actual temperature of the system that implements the model. Very interesting. Thank you. We'll go to Blue and then Scott. Good morning. I'm Blue Knight.
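The "shaky hand" temperature parameter just described can be sketched numerically. This is a minimal illustration, not from the paper: in a Boltzmann/softmax distribution p_i ∝ exp(−E_i/T), raising the temperature parameter T broadens the distribution over outcomes, i.e. raises its entropy; the energy levels below are made-up numbers.

```python
import math

def boltzmann(energies, T):
    """Boltzmann/softmax distribution over states: p_i proportional to exp(-E_i / T)."""
    weights = [math.exp(-e / T) for e in energies]
    Z = sum(weights)  # partition function
    return [w / Z for w in weights]

def entropy(p):
    """Shannon entropy in nats; higher entropy = broader, 'shakier' distribution."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

# Hypothetical energy levels for three reach outcomes (illustrative numbers only)
energies = [0.0, 1.0, 2.0]
cold = boltzmann(energies, T=0.1)   # low temperature: mass piles on the lowest-energy state
hot = boltzmann(energies, T=10.0)   # high temperature: close to uniform

print(entropy(cold) < entropy(hot))  # → True: raising T broadens the distribution
```

The same formula reads thermodynamically (T as physical temperature over energy states) or cognitively (T as a precision/temperature parameter in a model), which is the dual reading the transparency thesis trades on.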
I'm a research consultant based out of New Mexico. And I really enjoyed this paper. I really enjoyed the possibility of having a way to separate the mind from simulations of the mind. I think that's a really relevant and pertinent idea to put forward. But in relevance to temperature, and this is going to totally throw what I just said out the window: you know, I dabble a little bit in retail investing, right? And so the day before, or maybe in the morning, you might, what's called, take the temperature of the market, right? So you see things going up or things going down; you just kind of put your feelers out there and take the temperature of what the market's going to do. So I just wonder, in terms of cognition, in terms of active inference, if there's a way to take the temperature in maybe more of a metaphorical sense, right? Does that make sense? I'm not sure, so I can elaborate a little bit more. Like in the political system, right? We get a temperature: is this a liberal environment or is it a more conservative environment? And that might influence our actions, right? The active inference loop might be influenced by the perception of temperature in the environment. Right. So yeah. I mean, first of all, hi, thanks for joining. You and Dean and Daniel, I really enjoyed your discussion in point zero, and it made me excited to talk about it today, because it's cool just how many different questions came up. Anyway, it's cool. So, sure, you could use temperature in a metaphorical way as well. I guess in this paper I tend to assume that metaphors are not metaphors but identities, right? That's the ground-level assumption, and then you see where you're forced to distinguish between them.
So if we're talking about, again, a model of motor control where you're literally reaching for something, then I think in that case it's plausible that there's some relationship between the actual temperature of the pieces of the system that are driving the motion and the temperature parameter in the model. But if you're talking about something like taking the temperature of the market, or the political environment, there still might be partial mappings there, right? But, you know, if you're defining some arbitrary dimension, so if the temperature politically means one side of the scale is conservative and the other side is liberal, then I wouldn't expect that there would be some physical component to that. But the same way that your paper was, as you said, just putting it out there and not endorsing the psychophysical identity, it's almost like an econophysical identity thesis would be asking whether the thermo-like dynamics of markets actually have some sort of mapping that's more than a metaphor, but something that can be stated and blanketed or boxed off, and then can be understood as being adequate in certain situations but maybe inadequate in other situations. But when we talk about the market heating up, people know what that means, in a way where, you know, the market feeling happy doesn't really capture it heating up or cooling down. So, Scott, and then anyone else with a raised hand? Oops. I'm not sure if I should try to share my screen. I may just screw it up, so let's not do that. But let me... so I just pulled up my slides from 2013. And I said: under the first law of thermodynamics, temperature has two roles. First, from the perspective of a party outside the system, temperature tells whether two or more systems are in thermal equilibrium; same temperature equals equilibrium. Okay.
From the perspective of a molecule in the system, temperature tells about the distribution of the various molecules in the system over the possible energy states, right? Higher temperature equals a broader distribution of molecules over different states, a broader Boltzmann distribution. So what I was asserting in that context, in the context of those earlier slides, was: analogously here, are we looking within the system, at the cognitive system and the dynamics of components within it, or vis-a-vis an outside observer of that system? I think that's a relevant question here, because when we're talking about the first law here... Maxwell's Demon is actually a person now. Sorry, that's an obscure reference. But Maxwell's Demon was a convention to play with the second law of thermodynamics, a gating function. And now we actually have the notion of what a molecule in the system thinks, in a sense, is what I'm inviting here. And so the assertion, certainly the assertion I'm making in that context, is that we can capture either one of those dynamics, and it's infodynamics, not thermodynamics, is what I call it. But essentially, the first law tells you about interoperability, is my assertion, because you could have equilibrium in a symbolic way; I assert the equivalence of interoperability and symbolic equilibrium. And so that's when you're gonna have a meeting of the minds, and we can have shared consciousness, society, shared language, all that kind of stuff, is my notion of where the first law takes you. Thanks. That's very intriguing. I need to read your work on this, please. Yeah, I mean, not to give a too-hand-wavy answer, but I think since we're dealing with multiple scales here, we could talk about neurons performing active inference, or people doing it; I think both of those uses of temperature are relevant. But we can say one... Yeah, we'll get to Steven, but maybe just a general question.
Where does active inference come into play, and how does it come into play? Oh yeah, right, so you guys raised in the point zero that I didn't really talk about active inference that much; in fact, I only referenced the literature on active inference. So this paper isn't really about active inference, and it's not really directly about the FEP either. So thank you for having me despite that. But, I mean, obviously it's concerned with the same core issues, right? I don't want to take us off on a tangent, but the reason that I don't write about active inference that much is that, from a high enough, meaning abstract enough, point of view, I don't know that there's really a difference. To me, active inference relates to this stuff the way that Dawkins' extended phenotype proposal relates to more traditional ways of thinking about evolution and genetics, in that it's a new paradigm. But I don't think that there's a system that you can describe using active inference that you couldn't also describe using, say, traditional representationalist terminology. That's obviously controversial, and we could talk about it. But insofar as you're concerned with analyzing or designing a system, even if you want to write an active inference agent as a model, you only have control over its internal states. Yes, it optimizes its situation by acting and intervening on the environment, but you can't directly control how the intervention on the environment plays out. You can only control the expectations for sensory data given that you've acted in such a way. And so that's why I tend to default to what Jerry Fodor called methodological solipsism, right, which is a very different perspective from what people... I'd love to talk about this. Maybe they're more similar than I think, too. But anyway, active inference is great.
I think in fact it leads to new ideas for technology and new models, just because it's a different way of thinking about things. But when you're doing philosophical analysis like this, I don't find that it's, for me, often necessary to think in those terms. But I would love for someone to point out something that I'm missing by not doing that. Well, what we love here in the ActInf Lab is hearing everyone's perspective on active inference, and also adjacencies and synonyms and other languages and other ways of expressing it. So thanks for sharing it the way you did. Steven, if you wanted to. Okay, yeah. So, sort of building on that, you mentioned the Helmholtz free energy rather than the Gibbs free energy, so the broader sort of energy that's available. The question I've got here is... actually, go back to that diagram just one sec, with the distributions. There you've got the difference in the size of the distribution for the different-sized molecules, but you've also got that the average speed is higher for hydrogen than for oxygen; the speed is faster for something that's lighter. And that may have an impact at the local level when the actual atoms are hitting each other at the same temperature, but when you take the average, it's all around the same energy, if that makes sense. So I just wondered if there's any particular relationship to the way that the speeds are happening and the way that might change something at a certain scale, and just Helmholtz versus Gibbs free energy. Yeah, so to be honest, the reason I write about Helmholtz free energy is just because it's the thing that Geoff Hinton and others used when setting up things like the Helmholtz machine. The Gibbs free energy I'm just less familiar with; it adds complications that I thought weren't relevant. But if they are relevant, then let's talk about it.
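For reference, these are the standard textbook definitions of the two thermodynamic potentials being contrasted here (standard physics, not from the paper):

```latex
F = U - TS               % Helmholtz free energy: work extractable at constant temperature and volume
G = U + pV - TS = H - TS % Gibbs free energy: for constant temperature and pressure
```

where $U$ is internal energy, $T$ temperature, $S$ entropy, $p$ pressure, $V$ volume, and $H = U + pV$ enthalpy. The extra $pV$ term is the complication that matters for processes at constant pressure, which is why chemistry usually works with $G$ while the machine learning analogy is built on $F$.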
Could we maybe hear a little bit from you on just what is free energy, in the senses that you used it? Or how would you describe it to somebody who might not be very familiar with thermodynamics? Right, someone such as me, for example. So what I would say is, well, I'm a philosopher, but free energy in a physical, thermodynamic sense, as I understand it, and again, I welcome corrections, is just the energy that's available for doing work in the system. So it's the internal energy, all the energy that the system harbors just in virtue of its configuration, its existence at that moment, minus all of the energy that's not usable because it's tied up with disorder, with entropy. I'm sure someone else here could articulate that more gracefully than I just did, but that's the one term. And the other term in the equation here is the variational free energy, which is basically the difference between the generative model, the joint distribution that defines the system's generative model, or some kind of statistical model more generally, and the approximation to it that you've arrived at at a particular time step. So if you're doing variational inference, you're trying to fit this Q distribution, which, I mean, this has been talked about ad nauseam, so it's hard for me to get the energy to really launch into it, because I feel like we're all familiar with this concept, but it's just the difference between the approximating distribution, the recognition distribution or density, and the generative density. So I guess the thing to underline there is that it's only defined in terms of these P's and Q's, right, Q recognition, P generative, that are defined in terms of the generative model, which is a model, right? It's a mapping between hypotheses, right, or hidden states, and outcomes.
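The variational definition just given can be made concrete with a toy discrete model. This is a minimal numerical sketch, with made-up numbers: for a generative model p(s, o) and an approximating (recognition) distribution q(s), the variational free energy is F = E_q[log q(s) − log p(s, o)], which decomposes as the KL divergence from q to the exact posterior minus the log evidence.

```python
import math

# Hypothetical two-state generative model with a fixed observation o:
# p(s, o) = p(o | s) p(s); all numbers are made up for illustration.
p_s = [0.5, 0.5]            # prior over hidden states s
p_o_given_s = [0.9, 0.1]    # likelihood of the observed o under each state

p_joint = [ps * po for ps, po in zip(p_s, p_o_given_s)]
p_o = sum(p_joint)                          # model evidence p(o)
posterior = [pj / p_o for pj in p_joint]    # exact posterior p(s | o)

def free_energy(q):
    """Variational free energy F = E_q[log q(s) - log p(s, o)]."""
    return sum(qi * (math.log(qi) - math.log(pj))
               for qi, pj in zip(q, p_joint) if qi > 0)

def kl(q, p):
    """KL divergence from q to p, in nats."""
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

q = [0.7, 0.3]  # an arbitrary recognition distribution, not yet fit to the data

# F = KL(q || exact posterior) - log p(o), so F is minimized,
# and equals -log p(o), exactly when q is the true posterior.
assert abs(free_energy(q) - (kl(q, posterior) - math.log(p_o))) < 1e-9
print(round(free_energy(posterior), 4))  # → 0.6931, i.e. -log p(o) = log 2
```

Note that everything here is defined purely over the model's P's and Q's; nothing in the arithmetic mentions physical energy, which is exactly the gap the psychophysical identity thesis proposes to close.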
So there's obviously no a priori connection between that and thermodynamics. So it's just important to keep in view that one of these, in terms of how it's defined, is thoroughly cognitive, has to do with belief updating, and the other has to do with the physical states of the system. And that mapping between the cognitive, the inferential, and the physical is, again, the entire discussion and the entire point of putting out an artifact that is critiquable or extendable in the ways that you made it. So Scott, then Stephen, and then also anyone else who's watching live can just put a live chat in and we'll ask the question. So Scott, then Stephen. So this is really, what a great conversation. Thank you, and a great paper. So a couple of things. First of all, I wonder, we always talk about it being by analogy and not really thermodynamics, but von Neumann said to Shannon, use the second law of thermodynamics for your quantitative theory of information because, and this is his quote, the math is identical, which tells you something. It at least tells you something about math and humans. It may not tell you anything about the environment, but it tells you something about math and humans. But he also said, if you use the word entropy in any argument, you'll win every time, because no one will know what the hell you're talking about. So that's good advice on both of those points, right? So I always look at it and I go, okay, so let's talk about those differentials and that free energy question that we just answered. I deal in policy, I'm not a biologist, I'm a lawyer, okay, in a physics lab. And the reason I'm interested in this stuff is I think you can access the external states. And in fact, I think that's all we do as social beings. That's what rhetoric is: I am constantly trying to lasso the external states of everyone else. That's what politics is. That's what the whole misinformation problem is right now.
We're sending out memos. Now in a biological example, we kind of don't send around memos as much. There are communications, obviously, in the sciences and in nature and all that, but when we get into human populations, we may have something more going on here. Maybe mind is an emergent phenomenon merely of the attempt at harmonically coupling with other minds. Maybe, if we're successful in tuning into that local mind, we survive. If I am dropped into France and I don't speak French, then am I going to survive? Now I will, but what if it was 300 years ago or whatever, right? So if the mind is feral until it's trained, maybe that ability to affect the externality is everything. Maybe that's how we are humans in the way that we are, because we're pretty pathetic little organisms on our own, each one of us, right? We don't have sharp fingernails, and our teeth and claws, not so much; we could get bloody, but it would take a lot of biting because they're kind of blunt, right? So anyway, that's a bit of a tangent. But the question is, maybe we're not entering another brain; maybe the mind is where the action is, and when we're doing this in training, maybe we're merely tuning into something that exists already in our attempts here at this active inference, and we're just getting on board with the existing mind. And so therefore our steady state, our homeostatic state, is the one that is most aligned with the externality that's drawing us there. And as lawyers and as policy people, we create artificial externalities. That's why I use synthetic intelligence in our title. We're creating synthetic externalities and saying, hey, come on over here. We've got a good externality over here. Check it out. This one's resilient, sustainable, and the SDGs of the UN love it, right? So we're creating realities.
What do we think about that in terms of the notion of the efficacy, the possibility, of harmonic coupling with other minds, and that as another active activity and active interest, not just changing your internal model? Yes, that's extremely interesting. So I think this question of how exactly the external states get into the picture is pretty fundamental for a lot of people who are in this area. And I think you're right that it's essential, but it's essential in a strange way where it's completely inessential too. So it's essential in that, yeah, the reason that your steady state is what it is essentially involves the environment, right? So in the free energy principle, the NESS, the non-equilibrium steady state, is defined with respect to a subsystem that's embedded in an environment. And that steady state wouldn't exist without both pieces of that, right? And it's the fact that you're doing gradient descent on that that guarantees that your variational inference is going anywhere useful. And that's the sort of magical-seeming connection between actual accuracy and all of this churning that happens when you're minimizing free energy over time. But the way in which it's inessential is, well, I've been called a Cartesian before, maybe I am, I don't know, I didn't mean to be, but there's a sense in which the datum here is just that I can get things wrong, and I don't see that there's any obvious way of limiting the scope of that, right? So just fallibilism. So I think there has to be some subset of states of the overall universe that you'd identify with you and your perspective. Sure, the external states are essential to the story and they drive the dynamics de facto, but there has to be a difference between the actual external state at any moment and the external state that I represent via my hidden states or my internal states.
And so I think the reason any of this works in practice is because we've internalized the structure of the external environment, and this whole point about harmonic coupling between minds is really cool. There's something really cool and bootstrappy about that. Minds are things that harmonize with other minds, right? And, just to pick out another piece of what you said, the way that I tend to think of this stuff is in terms of imitation. So the fundamental term for a lot of people in this area is prediction, but I almost think our brains just learn to imitate the processes that produce patterns of sensory input. But the reason it's not just imitation is that the world is also just made up of a bunch of these generative models that each have their own driving signals. So I'm not sure, I don't know how to tie that up in a nice package, but it's a cool set of questions. And the reason, though, that I stop short of saying that the external states are as fundamental in understanding systems, biological systems, people, et cetera, as the internal states, is simply things like Cartesian skepticism. So we could talk about that. Yeah, go ahead, Scott. Just to follow up, it is interesting because of our struggles with it. As I say on this group very often, the only time you see reality is when you see paradox, right? If you're seeing an absence of paradox, you're intrinsically seeing a model of some sort. And so we should embrace a confused state and know that that is reality. And that's not just a statement for this conversation, but in general. Maybe what the mind is doing is the same thing that the body does when it's standing still. You're actually never still. You're always dynamically moving to maintain a semblance of stasis in a banded stillness, right? So maybe we have a banded mind.
Maybe it's fully dynamic. And maybe what we think of as us, an existential kind of element of it, is within a band, right? If people change over time, it's a Ship of Theseus in a way, right? You're still the same person, but some of the stuff has been swapped out over time. So maybe the embodiment helps us see mind in a fractal way at any level, because we can say, okay, if we have a system that's doing processing of a dynamic environment, some of that processing may be internal, some may be external. I mean, that's group formation. And part of it's genetic and consanguineous, and some of it is people who are similarly situated, right? If you're in a disaster and everyone all of a sudden has the same problems, people band together very easily, and their mind morphs into the organism that is needed for that acute situation in that stressful environment. And so maybe it's always dynamic, but we don't see the dynamics as much, and they seem more leisurely when we're studying them in the abstract; maybe they're highly dynamic in both directions, especially as we've become information beings and more dependent on symbols. Thanks, Scott. Just one thought on the dynamics: anyone who's recorded a neuron, or thought about that, knows that just keeping an idea in your mind for a few seconds is being underwritten by vast numbers of dynamical processes that are quite rapid. And a second point about the internalities and the externalities, the internal states and the external states: the Helmholtz free energy has this U, the internal energy, which is usually thought of as the configuration of the molecule, and then this negative TS term, which is sort of those vibratory modes that can't be as easily harnessed.
But when you pointed to synthetic externalities, which is the same thing as saying reconstructing the internalities, it's like, what if all of a sudden we could partition the organization of the system a little bit differently, so that we had more U and a little bit less TS? And when you think about a vibrating molecule, U looks like the covalent bonds and TS is the vibration. But if you think about an ant colony, the movement of the pieces is still part of the internal dynamics. And so especially when we're talking about information, which isn't covalently bonded atoms, internal energy is something about the model. It's not just about how things are touching and can be broken and reformed. So there are a lot of awesome pieces in what you added there. Stephen, and then anyone else with a raised hand? Yeah, building on that, I think you're making the right choice here for this paper using the Helmholtz free energy, even if Gibbs free energy is used more in active inference, because that adds an extra level of complication. And I think this is actually useful, because you're able to look at some of these distribution questions and just how information is extracted. It's actually helpful for me to see you go back to the Helmholtz, because it makes me realize, well, the Gibbs free energy, which is more about how action happens, asks what the internal energy change would have to do if it were also trying to keep within a certain constraint of volume and pressure, which you don't have to worry about with Helmholtz free energy. That would bring in the action part of active inference in some ways. But the underlying principles still hold for what you've got. So I think that's helpful. I don't know if you've got any thoughts on that. Yeah, that is helpful.
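The Helmholtz/Gibbs distinction being discussed here is just two bookkeeping conventions that differ by a pressure-volume term. A minimal sketch with illustrative placeholder numbers (none of these values come from the paper):

```python
# Helmholtz free energy: F = U - T*S            (work extractable at fixed T, V)
# Gibbs free energy:     G = U + P*V - T*S      (work extractable at fixed T, P)
# All numeric values below are illustrative placeholders, not measured data.

def helmholtz(U, T, S):
    return U - T * S

def gibbs(U, T, S, P, V):
    return U + P * V - T * S

U = 1.0e3    # internal energy, J
T = 300.0    # temperature, K
S = 2.0      # entropy, J/K
P = 1.013e5  # pressure, Pa
V = 1.0e-3   # volume, m^3

F = helmholtz(U, T, S)
G = gibbs(U, T, S, P, V)
print(f"F = {F:.1f} J, G = {G:.1f} J, difference P*V = {G - F:.1f} J")
# The two differ only by the P*V term: the pressure-volume constraint
# Stephen mentions, which is simply absent from the Helmholtz form.
```

This is why dropping Gibbs in favor of Helmholtz removes the volume/pressure complication without changing the underlying U − TS structure the paper relies on.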
And I would love it if, in the future, anyone worked on this same topic and unpacked the, I was going to say generalization, but it's maybe the opposite, how this would work out in the case of action more specifically, using Gibbs free energy and such. That would be really cool. Awesome. And taking the temperature of a person: going for a checkup to the doctor, they put the thermometer under your tongue, and that's a thermal temperature. But how are you feeling? What does it look like to take the temperature in the way Blue had been suggesting? So maybe let's go to the transparent code slide that you added, Alex, because that was definitely something we were curious about from the dot zero. We can just hear your thoughts on this, talk a little bit about it, and then anyone else can write down a question, raise their hand, or ask in the live chat. But we'd be really curious to hear what you thought about this. Yeah, sure. So the reason, again, that I put this in was that Dean, and maybe everybody in that point zero discussion, had questions about it. And I think the transparent code ends up sounding like a slogan that invites speculation, like it's a mystery or something. But this might be a case where I'm trying to say something so simple that it's easy to miss, right? So let me first frame this. The role that this plays in the paper is that if you want it to be the case that variational free energy and Helmholtz free energy in the thermodynamic sense are the same quantity, then you're going to have to explain how, well, both of them are sort of made up of probabilities, right? So you need to have a mapping from one set of probabilities to the other. So there has to be a way of understanding how the probabilities of states, and this is the big distinction in the literature, right, how the probabilities of states could represent.
So the probabilities of states in the system, the cognitive system that you're analyzing, which is also a physical system, how is it that those could represent the probabilities that are assigned by that system to the external states, right? The idea of a transparent neuronal code, or any transparent code, is just that, for example, on the left hand of the diagram here, if we had a transparent code in the sense in which I'm using that term, then the probability of B firing given that neuron A fires, so if B is a postsynaptic neuron and A sends a signal to B, and there's a certain probability of B firing subsequent to A firing, that would represent the probability of some state of affairs Q obtaining given that some state of affairs P obtains, where, and we can talk about where this mapping comes from, the assumption is that A represents P occurring and B represents Q occurring. So that's really all I meant by a transparent code. It's a simple idea, but it's definitely controversial, right? It would mean that if you think there's a certain conditional probability of the cat being on the mat given that the cat lives in the apartment, then the probability of the states that represent those things has to be tied up with the probability that you assign to those things occurring. And that's not at all an uncontroversial statement. But it's a simple idea. And this is an idea, by the way, that's used in some of the early connectionist literature that Blue mentioned, on Boltzmann machines, right? So there's an early paper on Boltzmann machines where this kind of stochastic code is proposed. One more sentence on this: the basic idea is that the objective, measurable probability of events within the system maps onto the subjective probability that the system assigns to the states of affairs that are represented by those internal states. Interesting.
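The transparent-code mapping Alex describes, where the objective conditional firing probability within the system equals the subjective conditional probability the system assigns, can be sketched with two stochastic "neurons." The numbers here are hypothetical, chosen only to illustrate the A-represents-P, B-represents-Q structure:

```python
import random

random.seed(0)

# Subjective belief the code is supposed to carry, e.g.
# Q(cat-on-mat | cat-in-apartment). An illustrative number, not from the paper.
subjective_cond = 0.7

def run_trials(n=100_000):
    """Simulate neuron A (representing state of affairs P) driving neuron B
    (representing state of affairs Q).

    Under a transparent code, the objective probability that B fires given
    that A fired is set equal to the subjective conditional probability the
    system assigns to Q given P.
    """
    a_fires = b_given_a = 0
    for _ in range(n):
        A = random.random() < 0.5                      # A fires at some base rate
        B = A and (random.random() < subjective_cond)  # B fires conditionally on A
        if A:
            a_fires += 1
            if B:
                b_given_a += 1
    return b_given_a / a_fires

measured = run_trials()
print(f"measured P(B fires | A fires) ~ {measured:.3f}")
# The measurable, 'objective' conditional firing statistic recovers the
# subjective probability (~0.7) that the code was built to carry.
```

The controversial part is not the arithmetic but the identification: reading the left-hand statistic off the physical system as the right-hand belief is exactly the assumption the paper needs to equate the two free energies.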
And in the context of the cognitive system of interest, we're talking about the brain. We know that we're not going to get the transparent coding from the photons hitting the retina to the probability of the cat being on the mat. So with this multi-level processing, as you've hinted at with the multiple layers here, it could be possible that at some neuronal stage, at some level of processing, there is a pattern that maybe represents the cat being on the mat probabilistically, which also invokes the whole representation question we had talked about months ago. Right. So it's fun to see how you delineated where there's controversy, sometimes even both sides of the discussion around a controversy. And then it's like, great, here's the artifact, let's see which way we're going to push it. If there turns out to be more evidence for transparent coding, then we'll push it that way. And if there turns out to be less, then we can push it a different way. So Scott, if you're there, I don't see you yet. So Steven first, and then, okay, Scott, go ahead. And then Steven. I just wanted to mention there was an article in Nature a couple of years ago where an assertion was made, and this is what caught my eye, that consciousness arose 520 million years ago. And I was like, well, that's interesting. What's the evidence? And what they found were some structures, I think they were proto-notochords or something, but basically it was a ganglia kind of thing, secondary processing. And their assertion was that because they found physical evidence of a neuronal structure in fossils, some structure that apparently was able to be preserved in shale or something, and because it was secondary processing beyond the eyeball or the ear or the perceptual apparatus, that secondary processing, they suggested, meant there was consciousness, because that was their definition.
But it was interesting, and the reason I raise it here is that notion of embodiment. So again, let's talk about reaction chambers. There are stimuli all over the world, things happening, and then we have perceptual abilities afforded to us. And so that comes in, but the structure of it is TBD. And so then the organism embodies morphologically certain structures over time, eyeballs and notochords and ganglia and things like that. And those themselves are information structures, they've just been embodied in the morphology. So it's morphological computing for a biological organism, because it now can do that thing. I can't see in ultraviolet, but bees can, that kind of thing, or infrared. So I just wanted to raise that idea. That assertion that offended me originally, that consciousness arose 520 million years ago, is not so offensive in a way, because what they're trying to do is grope at the notion of the layering on of morphology and embodiments, and say, look, here was a ratchet, something happened here. It's like Julian Jaynes, when he wrote The Origin of Consciousness in the Breakdown of the Bicameral Mind in the 70s, J-A-Y-N-E-S, and he was asserting from a literature search that Wernicke's area flashed on at one point, the equivalent of Broca's area, the language center, on the other side of the brain. In his literature search, he looked at Joan of Arc, he looked at the Iliad, and he said that literature embodied that notion of the mind actually changing. In ancient literature, you can see the mind turning on an internal voice. And so I just wanted to raise that notion, those embodiments of the external and internal states lying around all over the place, right?
And so your paper does a nice job of inviting the treatment of those as candidates for looking at the mind being assembled, particularly if the mind is external to the brain and the brain is just an antenna tuned into the mind. Thanks. One funny little synchronicity there is that Julian Jaynes wrote the book that Scott mentioned, The Origin of Consciousness in the Breakdown of the Bicameral Mind, but Edwin Jaynes was a very famous thermodynamics researcher who introduced the maximum entropy interpretation of thermodynamics. So, related, like we all are, by those names. Were they related by blood? Aren't we all? Yeah, come on. Stephen, and then anyone else. Yeah, just coming to this diagram that you've got here, one thing I'd be interested in, and as you say, neuronal or other internal systemic states, so you give a little bit of latitude to dynamical aspects of that, maps to cognitive states of the system, beliefs. And I'm wondering how much you're tied to those beliefs being statistical, computational, sort of separate things, or how much is it beliefs on action policies? I mean, I'm not pushing active inference again, but it's fun. Yeah, what are your thoughts on that? Yeah, there are all sorts of questions here about how we individuate representations and such, but I don't really think of these as detachable, separate entities at all. So that node is labeled P, but I'm one of these functionalists who thinks that the label itself does nothing. The label is a useful indicator for people outside the system to understand what it represents, but the representational function is just the role that that node plays in the network. And I think that it subsumes and includes connections to action. I just tend to think of connections to action as the special case where what your generative model is generating are states of the motor effectors.
So I think any kind of planning that the organism learns to do can also be understood as learning sensory motor contingencies, which will influence how states of the motor effectors are generated in tandem with states of the perceptual system, and how perceptual biases occur. So basically, yeah, these don't need to be separable features of the system. When I say that a state represents P, because 'represents' for me has a sort of functionalist meaning, I just mean that's a label for a set of ways in which the system can behave because of its internal structure, including internal dynamics, right? It's really just a label for a property of the dynamics of the system. Cool. And also, beyond action being at the gross organismal motor level, internal representations or states could potentially be hormonal or neurosecretory. Neurons fire, but neurons have other affordances too, like the little micro motor behaviors with the motor proteins, like releasing a vesicle of a protein that's been transcribed and translated, tiny little motor behaviors of nested smaller agents inside of that neuronal interface. So Stephen, and then anyone else with a raised hand? Actually, just on what you were saying there, Daniel, does that mean that you can almost release hormones through the brain? It doesn't have to be through a gland in the body, but you've got smaller scale releases. Neurosecretory cells and neuroendocrine biology are large areas. So yes, many kinds of cells take up and secrete molecules. And neurons come in various subtypes, secreting specific hormones, neuropeptides, and all kinds of signaling molecules, short range and into the circulation. Got it. Okay, thanks. Let me process that, that's really useful. I didn't know about that.
And I suppose just adding on to this, when you mentioned sensory motor contingencies and being able to... So I'm wondering, at the scale of consciousness and things entering into consciousness, or phenomenological awareness, do you sense that there may be, even if active inference is underwriting it all, other types of... Like, it doesn't have to be pure active inference at that scale, and it could be working with other types of architectures or process architectures. I mean, well, one thing is that consciousness... So I think Scott brought up consciousness for the first time here. That's really interesting. I deliberately didn't say anything about it in my paper because I find that it's a very thorny topic for me, and of course for many people. It's notoriously not an easy problem, or maybe it is easy and it just looks like a hard problem. But anyway, I'm not sure. I do think that all the cognitive dynamics I'm talking about are probably more fine grained than what shows up in consciousness. So I wouldn't say that there's some transparent mapping between whatever you believe, in the sense of belief that's implicitly defined here, and what you are consciously aware of. There are all sorts of theories of how consciousness relates to this stuff. There was a second piece to what you said that I wanted to address, but I don't remember what it was. Well, I suppose it was scale, but I think you kind of addressed it a little bit there. So Daniel, were you going to say something? Yeah, we'll come back to it if you remember what it was. Yeah, Scott, and then anyone else with a raised hand? Yeah, I wanted to go back to the notion of neurotransmitters from a minute ago, when you were talking about endocrine neural function. A couple of things. There are a couple of books here I just wanted to hold up. There's Cognitive Archaeology and Human Evolution, kind of an interesting notion there, with a couple of authors.
And here's the other one, Archaeology and Memory, kind of an interesting one. The reason I raised these is it's kind of interesting when you talk about neurotransmitters. I remember reading an article a few years ago that suggested that among all the animals in the animal kingdom, and the animal kingdom is an old representation, a Carolus Linnaeus thing, I get it, but in any event, there's only one animal that doesn't share any of the neurotransmitters with the rest of the animals. It's like the box jellyfish or some kind of jellyfish, I forget which one it was. And I thought that was interesting given what we were talking about a minute ago, when Daniel was noting about neurotransmitters. Well, again, it's morphological computing. Those are artifacts of earlier solutions. They're practices of processing of information by these organisms, right? And so the fact that we can go back and say, wow, I wonder if these things were related, given the fact that they don't share any neurotransmitters. That's kind of an ethereal element. It's not something you get like bones; it doesn't fossilize well. But isn't it interesting that we can find the morphology of neurotransmission in current organisms and use that to understand lineage and relationship? So in fact, it's a morphological artifact. And so it's not epigenetic, right? But it's interesting, the relationship of the homeostatic state of thinking or cognition of a certain line of organisms and the morphology of its neurotransmitters. I'm interested in eusocial insects, for instance. Do people do profiles? If anybody here knows anything about eusocial insects, please type in. What's the story with neurotransmitter analysis among eusocial insects? Is there a special neurotransmitter especially invoked? And is the morphology of neurotransmitters, perhaps, where the organism resides more than other physicalities? Thanks. Great question, Scott. So first on the ctenophore point, and then the eusocial insects.
So the ctenophores, indeed, it did suggest new evolutionary hypotheses when people investigated which neurotransmitters ctenophores used, and they convergently utilized some of the same simple charged amino acid derivatives, but they used them in different ways. For example, glutamate is not used in muscle contraction in vertebrates; we have cholinergic receptors at the muscle. But glutamate was used there, which supports Alex's point about functional representation. There's nothing that says glutamate means muscle contraction. It's a functional context, with a receptor and a muscle cell and a neuron, that generates the contraction. And potentially for a later day, maybe when we discuss an active inference paper on this, but we've done multiple studies measuring neurotransmitter metabolism and gene expression in ants, and there's been a lot of work on honey bees as well. The one point I'll make here, since it is a large area and people can look at the papers if they want, is that the neurotransmitters involved in the regulation of foraging in ants and bees are inherited from the neurotransmitters that regulate foraging in solitary insects, like dopamine and octopamine in fruit flies, where it's been investigated in the context of solitary foraging. But the inputs and the outputs of foraging for the eusocial insects are different. So whereas in Drosophila, internal hunger signals, representations, maybe some sort of neurohormonal encoding, lead to foraging behavior, and that results in the acquisition of food that re-enters that bodily system, an ant forager is essentially foraging food for the larvae, a different part of the colony organism. So the inputs, the stimuli that induce foraging behavior, and the outputs, where the food goes, are different. But the foraging motor behavior is conserved.
And it's that rewiring of the motor motif of foraging, via the same neurotransmitters, octopamine and dopamine and others, that actually enables the colony life cycle. So it's a really awesome area. I hope we can talk more about how that relates to active inference. So Stephen, and then Scott, and then anyone else who raises their hands. Yeah, I'd like to ask a question which sort of links to that. It's a kind of semi-theory I've got. But is it feasible that the brain is like an organism? Basically, the cells that become the brain effectively mutate and transform into the unique form of the brain and the nervous system, basically like another organism growing into the main organism, right? So you've got two organisms once they differentiate, and the brain is like the control center and the nervous system is like the control organization, growing into and with the fascia of the other organism, in a human's case the fascia. I just wondered if that would be plausible, and whether that would do anything with this brain organism boundary question that I know you are kind of connected to, in terms of wanting to maybe maintain more of that potential. I don't know if that's something that other people are talking about, or whether it's a little bit out there, and just biologically or from the cognitive philosophical sense how that pans out. Yeah, I love the biology and biochemistry and all this wonderful fine-grained life sciences stuff that you guys are bringing to the table on this. That's not as much where I'm coming from, but I think it's really cool. And the idea that you just articulated sounds out there and plausible to me. So I think that's kind of what I would expect, that this same story can be told on multiple scales internal to the system, right? And so the brain learning, sort of learning, growing.
I guess I've just been conditioned to think of all this as one type of process, right? So I said learning, but morphological development, growing into its mature shape, is a form of learning. But so anyway, that happening and interfacing with the rest of the organism, I definitely don't see any reason that we need to draw a joint in nature around the physical organism that we can perceive; there are going to be many similar joints that we could delineate within. So I think that's a cool idea. I don't know much about the details of that process, so I can't check it or validate it, but it sounds plausible to me. Thanks, Alex. Blue, and then Scott. So just to maybe ask Stephen to unpack a little bit what he said. So in terms of a brain organism separation or duality, do you maybe mean mind organism? Because in my experience, the brain is like the control center, right? I don't know that there's necessarily a division between the brain and the organism, but definitely the mind is kind of this ethereal concept. So could you just unpack that a little bit? Well, I was actually partly thinking about the work of Michael Levin, and in a way my understanding is that once a cell becomes a neuronal cell, there's no going back. So in some ways it's curious whether that whole structure, be it from a small organism, is effectively like we basically have two organisms in our body, because our cells have differentiated into the bodily organism and the neuronal one, and they work together in this symbiotic kind of cybernetic loop. So that's how I'm thinking about how it pans out. So if I could just speak to that a little bit: neurons can de-differentiate, and do, especially in neural cancer types. So we do see some de-differentiation there.
It's not as committal as we thought before. It's kind of a budding field; the antique way of thinking is that there's a set lineage, but there are ways you can convince them to do otherwise. I think Mike Levin's thing is the bioelectric field, right? He separates out the bioelectricity of the organism as maybe the overarching control center, but I've not seen a neuron-body distinction, which is why I was asking. Well, one thought on that, and on Scott's point about the emergence of a second level of the nervous system, is to remember from developmental biology that ectoderm, the outer layer of the developing embryo, turns into neuroectoderm, the tissue that gives rise to the nervous system via the neural tube. So developmentally, neurons share an affinity and a proximity with the external layer, and when you start nesting internal layers inside that closed tube, you start getting higher-order nervous systems. And then the pituitary gland, to close this with a neuroendocrine note: the pituitary is actually amalgamated from multiple developmental sources. It's interesting that it has such a pivotal hormonal position, integrating hormones and the brain's function, and that it is itself, from a tissue perspective, a re-fusion of multiple branches of the ectoderm that had branched and then reformed into a new organ that serves a very coordinating role in organismal development. So, Scott, and then anyone else with a question.
You know, this is the best way I can spend my day: with people who say "the pituitary gland is amalgamated from multiple ectoderm sources." I'm in the right place, I can tell you that. A couple of things. Let's go back to this part-and-whole dynamic and the cancer-as-an-organism stuff for a second, because this is good stuff. In the cancer analyses where they start to treat cancer as an organism in an ecosystem, some of the framings think of it as a recruitment of a kind of chaos, like having a drunk uncle stay at your house: a chaotic use of resources. That's interesting, and going back to Stephen's point about the mind being different, I'm thinking of symbiogenesis here, Lynn Margulis's work: the chloroplasts in plant cells and the mitochondria in animal cells are artifacts of once free-living bacteria. They migrated on board with eukaryotes because it was a better deal: more leverage, more de-risking, higher levels of engagement with the outside. They changed the internal physicality and morphology so cells could operate at much higher levels and access far more complex engagements than a single-celled organism, including the engagement that comes through sexual reproduction, which introduced additional symbolic novelty so they could be more dynamically engaged with environments. Clonal reproduction does not introduce a lot of novelty, so it's hard to keep up with dynamic environments. This sounds something like intelligent design, but it's not. So it's interesting, Stephen, your notion of the mind being separate, because in a way the mind is formed by the social environment. I've told the story before of my sister adopting a five-month-old Chinese child, who is now 22, who was raised in central Pennsylvania and speaks
Spanish and English, does not speak Chinese, does not like Chinese food or Chinese music, and looks like a Chinese person but, in that sense, is not. So is that mind a Chinese mind? The problem is we don't have any control groups on mind formation, right? Once you're formed, you are who you are, and we say, okay, that's it. But I like where you're going, Stephen, with the notion that maybe the mind is an independent quantity, and the brain is an antenna to the mind; maybe the body contains the antenna components. Again, I don't mean a real antenna, but it needs to be tuned, it's feral, and maybe the tuning actually creates a mind. Maybe the mind is not resident in the brain in a child, and maybe Bayesian inferences construct a mind. Thank you. And maybe to direct it to Alex before we go to Stephen's question: Dean, you really focused on this sentence, "It may be, for example, that biological systems run an algorithm closer to contrastive divergence than to simulated annealing." So what does that mean, or what does that reflect? Yeah, indeed, maybe you want to give a little extra info on that. Just a quick thing: I want to hear Stephen and I want to hear everybody else, but, sorry, I have to say this, in the background I've got people taking the front of my house off, so I haven't been able to do much other than try to control my ADD. But I have a big question for Alex once we get through the part that Stephen and Scott want addressed, and it's specifically about what you just raised. Great, let's go to Stephen and then we'll return to Dean and the contrastive divergence. Yeah, maybe this will bring it together: I suppose what I'm proposing is that there is some structural biological differentiation which will have an impact in some ways on how we think about extending cognition. But I'm very much of the mind-body-environment dynamical-system kind of idea, so I
suppose I'm talking about there being a difference biologically, but probably at the macro level. I'm trying, anyway, to formulate approaches that talk about a whole system and nothing less, though I recognize it may open up other cans of worms. But I wonder: does this annealing relate to the mind-body-environment dynamic playing out, or is it a little more separate from that? Yeah, the use I was making of this contrast between contrastive divergence and simulated annealing here was kind of more specific. I was just thinking of annealing, the process of gradually lowering the temperature, and simulated annealing as an optimization technique that mirrors that process. So I guess the answer to your question, Stephen, is that I wasn't really thinking about system-environment relations here at all; I was just thinking of two different internal processes and how they might be mapped onto these learning algorithms and processes in physics. So I can wait to take Dean's question, or whatever. Maybe, Dean, let's go for the divergence. Yeah, I'm not predictable; I'm going to pull up the paper because I need to explain this. So, Alex, in the paper, in subsection 1.2 on the role of internal energy, you talked about how the inverse relationship between probability and energy is exploited in most uses of energy as a cost function in optimization problems, so you made that clear. And then a little further down you talked about contrastive divergence, which involves the equilibrium distribution of a restricted Boltzmann machine, and you got a little more into the comparison with simulated annealing, and then you got into section two, the ensemble piece. So here's my question:
if I were to bring three things together, casting the widest, largest Friston blanket yet around your paper, because I think that's what your paper does: if I were to bring a Polaroid picture, a sprinkler, and an astronaut together. The astronaut is identity, which to me was the primary attractor of your paper; then there's that distribution piece; and the Polaroid is the isomorphism. Could you explain how you brought these things, so seemingly distal, into some sort of coherence? Because I know you did this on a very proximal level, but it has far-reaching implications, based on the kinds of questions Stephen has asked, the ones Scott's asked, the points Blue and Daniel have made. It doesn't just seem to be local; it seems to be really distributed as well. And I'm curious: you wrote the paper, but now that you've been able to step back from it and take it in with the perspective that distance gives, you must be looking at it in interesting ways, because there isn't just the stuff on the paper, there's the stuff you've reflected on since, seeing the inversion and wondering, holy mackerel, did I open up a can of worms, what have I done? I'm just curious about that on a personal level. Well, in general, the most striking thing about how this has played out is that people are really interested, Blue mentioned this in the discussion, and I had emails about it too, in the distinction between genuine minds and simulations, which was completely an afterthought when I wrote this. It was just, oh, I'd better deal with the scope of this, so I should put in a sentence about it. And I did put care into it; I put in the best sentence I could, but it wasn't my focus. But yeah, the
thing that strikes me about this: it was intended to be a very fundamental thesis that I'm considering here, so I do think it has far-reaching implications. I have no idea whether it's true, but if it were, it would at least have far-reaching implications. What I'm less clear on: I'm still waiting for the other shoe to drop. I'm waiting for someone to say, here's an example of a biological system in which there's no way you can map the cognitive interpretation directly onto the physical interpretation, because if that's the case, and if I find it a compelling example, then I guess I'm wrong. So far that hasn't happened; there have been some interesting suggestions. So I don't know if I'm directly answering your question, Dean. To me the devil is still very much in the details, and this is essentially a defense of a possibility: I was trying to show that some a priori, prima facie objections to this kind of thesis were not compelling, but all sorts of details could emerge that might call it into question. So I don't know if that really answers your question, sorry. No, and again, it was kind of like when I was talking with Blue and Daniel in the point-zero, and I brought this up: I talk to people who are very much in the enactivist camp, and they want to see it realist, and then there's the statistical camp, and they really want that sort of top-down view, where's the channel, what's going on. What I thought your paper did is open both of those perspectives. I don't know that they even have to merge or overlap; I think the fact that we can use both and see both is just a really interesting thing. Now, I don't know whether that is disprovable, or whether it should be. I guess
that would mean enactivism is not a thing, or statistical modeling is not a thing, and yet we use these things. So I guess they can be discounted, but I don't know that they'll go away. So I guess you have answered my question: we can leave things open and not have them settle everything, but we can't just discount something and throw it away until somebody, as you said, comes along and says, no, it doesn't work. Yeah, and just one quick follow-up on this. One reason I wrote this paper is that I think what a lot of people find compelling about the free energy principle and the literature around it is the idea that there might be a link between physics and cognition, right? That there is a fundamental theory kind of emerging. But I think it's very hard to pin down the precise relationship; it's hard for different people at different times, although it's becoming clearer, and it has been worked out a little in the free energy principle literature. It's difficult to articulate exactly what the deep connection is between variational inference on the one hand and the behavior of physical systems on the other. So I just wanted to throw out one version, one possibility, of what is supposed to be a clean, precise, brief statement of what that relationship could be, so that, yes, it's open-ended, but so that it can be falsified, posed against other interpretations of that relation, and so on. Thanks, Alex. We'll go to Blue, and then if anyone in the chat or Dave has a comment, feel free, but first Blue. So, Alex, I think this goes back to what you were saying at the very beginning about scale, right? It's hard right now even to map cognition onto neural activation. Right now we don't know enough; too many components in the system are murky for it to really be clear what's happening. And I
think, as we talk about information going from a cell to a brain, from a brain to a neuron, from a neuron to cognition, to action, to bodily regulation, and these kinds of things, we don't really know: is information compressed? Is it integrated? What is happening to scale a system up or down? I think that's our hard problem. Yeah, I'm maybe not sure what you mean by scaling the system up or down. Well, in this section you talk about the scale-free nature of free energy dynamics, and about how global and local free energy minimization could be understood as subsystems undergoing a similar process at their own scales, right? Yeah, so the thing I wanted to react to in what Blue said was the idea that we don't really know what's going on in the brain, that it's still very murky. I think that's true; it's always relative to some ideal of how much we could know. But what got me into this whole literature in the first place, starting with the machine learning literature around 2012, 2013, was that I started to feel like we actually were getting a grasp on some of these principles quite generally, like the Helmholtz machine, for example, which, Blue, you talked about in the point-zero. That's a model; now, if the model is accurate, then it seems like what's happening is that as signals are sent from one neural population to another, further from the sensorimotor periphery, the information is being compressed, right? So that's a particular hypothesis: that there's a successive compression of information, meaning things are represented using lower-dimensional vector spaces, if you want to translate between vector spaces and neuronal populations. And we have a pretty good idea; this sort of architecture explains so many things. This is what got people so
excited about the predictive brain or Bayesian brain type hypotheses: there are so many different things from different domains in psychology and neuroscience that this kind of perspective seems to unify. It's to be expected in a sort of Helmholtzian architecture that you can endogenously cause the same sorts of states you could be in as a result of perception, and that imagination and the internal generation of states have a particular role to play in learning, and things like this. So I'm kind of rambling, but my sense is that this Bayesian, Helmholtzian revolution, or whatever you want to call it, in recent years has shown that we do understand many principles of how the brain functions at the human scale. If the question is about how other scales fold into this, there's some work that Maxwell Ramstead and Casper Hesp and people like that have done on understanding neuronal population dynamics in terms of active inference at that scale, and that's really interesting. So what I would think of as a structural representation from the point of view of me or you, a human being, you can also think of as a process of neurons performing active inference at their own scale. But to me the hard question there is how you relate the contents of the representation at the higher scale to the contents at the lower scale; that's the thing that bothers me in this area. Yeah, Blue here, if I could just respond really quickly: that's exactly the thing. We can see how active inference is happening at different scales, for sure, but how scaling up and scaling down, the translation from one scale to another, happens is still very mysterious. Yeah, totally. Cool, so we have about twenty minutes left, so if
anyone has a question in the live chat; otherwise we'll go to Scott and then Stephen, and be preparing our thoughts for what to continue discussing in point two. So, Scott. Picking up on the last couple of points, really interesting. I was thinking, what was the gentleman's name, Bateson, who said information is the difference that makes a difference? Anyway, the difference that makes a difference. And I thought the first difference he's talking about is a perceptual change in the environment, and the second difference is whether that perceptual change is relevant to you. I want to unpack that a little here, because content is one of the things I talked about in my MIT presentation. One of the things I noted is that Shannon tells us signal versus noise, but it doesn't tell us whether the signal is a cancer cure or a chocolate chip cookie recipe, right? The content is left undefined there. Now, I define living systems as autocatalytic, entropy-secreting structures; notice I don't say they have to be in any particular embodiment. My son said the only reason I came up with that definition is that I wanted to use the word secretion because it was juicy, and he's right in part, I did want to make it juicy. But the reason I raise it here is that when you look at Bateson, or whoever it was, the difference that makes a difference, information really maybe is what we cognitive beings call relevant differentials that we perceive; irrelevant differentials are just noise. So the difference that makes a difference we call information, but there are a lot of differentials out there in the world. The reason I'm going there is that we're talking to a philosopher today, and what I deal in as a lawyer is rhetoric, and rhetoric is what you do to bring together incommensurables, right? And rhetoric is not like crazy glue; it's like
epoxy: it doesn't just bond things together, it also fills the gaps. So the rhetoric of information is our storytelling about relevant differentials, and we seek to sculpt our internal model and entrain other people's internal models, because when we entrain other people's internal models with our rhetoric, it de-risks our externality; other people's internal models are my externality, right? So rhetoric is the tool we use to reduce differentials, or to increase relevance alignment, among cognitive systems running cognition. How's that? I wanted to try to integrate the last couple of comments. It felt like maybe information is an artifact we put too much into; maybe information, the difference that makes a difference, is something we can only identify after the fact, because before the fact we can't tell what's relevant. For instance, suntan lotion once never existed, because we didn't know that photons cause skin cancer; now there are entire commercial factories and areas in the grocery store that sell suntan lotion. It doesn't mean the sun became carcinogenic; we just learned that it was. We were able to make that inference at large scales, and now it influences buying behavior, which leverages and de-risks future behaviors, which is the reason we care about information: the difference that makes a difference. Anyway, a couple of thoughts on those last few things. That's really cool stuff. I really see things similarly on a lot of fronts. It sounded to me like you're saying we don't have to be too fixated on this term information, because it really means differences that make a difference, or being aware of differentials. I agree with that. I always go back to the bedrock stuff, so I think Shannon's interpretation of information was
actually a little bit unnecessarily epistemic; I think it could have been just doxastic. He says it's coming to know something, but we could generalize that and just talk about what one comes to believe. If we work with that notion, which isn't information as Shannon defined it, because he talked about knowledge, then whatever that generalization of information is called, I think it would really be a doxastic logic. So that's all I think we're getting at with the concept of information. But what gets really interesting and difficult philosophically is this: obviously there's self-information, right? Something pings my retina, or something downstream from that, and I notice a little blip in my visual field; that's something I'm aware of, information about itself, anyway. But the more important signals carry all sorts of information about other things, meaning they lead us to predict and expect other differences beyond themselves. And so the tricky thing, to me, is that I take myself to be this system of interacting parts, neurons and other things too, obviously, that sort of embodies a perspective, and the information a signal carries depends on that whole system and how the signal impacts it, right? And one of my neurons is an entirely different type of system. Here's my daughter; she has to tell me something. What's up? You got ice cream? Awesome. Sorry. That's cool, that's awesome. Yeah, your generative model's on a live stream right now. I don't know, so I was saying something about neurons, and now I'm thinking about ice cream. Basically, the question for me is this: I think of content, the information content of a signal for a person, as relative to the person, to the system,
and so if you've got a completely different system with a different structure, I think you can compare the structures of the systems to see how similar the content is, but it has to be at that holistic, entire-system level that you make these content comparisons. I don't remember whether that directly links back to where we started, but, nice. Yeah, one quick thought on that: the difference that makes a difference is also a big key term in the philosophy of biology, like the mutation that is the difference that makes a difference for something. In active inference, it's the difference in observation that makes a difference either for the generative model updating, which is inference, or for action. So we have a way to talk about it. And on the doxastic note, not my total area, but it looks like this is how we can work toward pluralism across perspectives. Because if it's framed as one person believes it's hot and one person believes it's cold, and there's only one truth and we're going to play tug of war, then we have to fight. But if it's true that one person believes it's hot and true that another believes it's cold, there's no problem. There might still be differences about what temperature people want the room, but by framing things in terms of what a perspective, a system, or an agent believes to be true, or just believes, period, it could even be an ironic belief, all of a sudden we can interact with multi-agent systems and deal with them in a way that still has local logic. So, Stephen, and then anyone else. Thanks. Just closing out, I suppose: I'd be curious about this work with beliefs, which, as Daniel said, nicely bridges action-oriented and pragmatic approaches with more epistemic approaches, because beliefs tend to be beliefs about what will happen when something is done in the world, often, though not necessarily.
And I'm curious how this paper and this kind of underpinning relate to, or help inform, where you were going with the new paper, the intriguing one about folk psychology. That's one of the bridges out from this paper: folk psychology being how we intuitively believe the world is, based on our experiences, so there's a certain element of how we might structure the world before science tells us something else, certainly in a socio-cultural sense. So I'd be interested in how that extension plays out from this paper. Yeah, there's definitely a direct connection between this paper and that one. I wanted to write a sort of sequel to this paper where I talked specifically about desires, and how those would be grounded in thermodynamics according to this picture, or conative states in general, states with a world-to-mind direction of fit. And then I started talking to Ryan Smith and others about this, and we ended up co-authoring a quite different paper that has some commonality. But I still want to write one that's more on the thermodynamics side of things. The connection is in the identity thesis: at the end of this paper I argue for it by way of David Lewis, where one of the terms in the identity is the implicit definition that folk psychology gives us of mental states. We haven't really gotten into that; maybe we will next week. There are Ramsey sentences; I'd love to talk about those. We've got the Ramsey sentences, we've got the sleep-wake simulation. Yeah, other topics we'll really look forward to exploring. Yeah, let's please get into that. But basically, as usual, I feel like I'm not doing an adequate job of answering questions, sorry. But, by
folk psychology in this context, I'm really thinking of something like theory of mind, right? Our implicit theory of the way people, including ourselves, function. I think that's essential, even though it's not highlighted until the end; it's essential to what I'm doing in this paper, because if you don't have mental states to define, then you don't have an identity relation, since mental states are one of the terms in that relation. Although, I guess, if you're doing pure Bayesian modeling, you could just write down a bunch of states and equations for how they update without thinking about folk psychology at all. But the point is that one of the terms in this relation is the mind, and I think folk psychology, I don't actually love that term, it's just a philosopher's term of art, I just mean intuitive human psychology, is one of the terms in this relation. Cool. Scientists are folk too, so cognitive diversity means we have all different ways of knowing, and there may not be one default or simply intuitive way of working or knowing, because what's intuitive to one person will be out of the groove for somebody else. That's why the movement toward pluralism, in terms of our understanding of ideas and of ecosystems, is so interesting to talk about and to see connected to active inference. Any thoughts, or what is something somebody is excited about from the conversation today, or thinking about how we could resolve or at least introduce again next week? A few of the things we have written down: the sleep-wake simulation, what is that agent doing, what does it do in terms of your model; the Ramsey sentences, and where that style of logical representation comes into play; empirical validity, what kinds of experiments you are interested in or curious about. What else could be interesting? We could still talk about the contrastive divergence
versus simulated annealing thing. I just really briefly wanted to follow up and say that when I say intuitive psychology, I don't mean to suggest there's just one of those. I do think we can average over different perspectives to get at it: the individual perspectives are ultimately what's real, but if we want to talk about things at an abstract level, we can average over those, and I think that's kind of what Lewis tries to do. Maybe this is something we'll get into next week, but when he talks about folk psychology, he engineers it so that it doesn't have to be one set of statements embodying what people think about psychology; it's a disjunction of different sets that mostly overlap. And, yeah, pluralism really is a good ideal in many contexts. This might be controversial, but I actually think it's not always the goal in science, because if you start out with pluralism, then you just don't do anything, right? It's like, okay, everything's fine as it is. Sometimes we're seeking a unifying underlying explanation, and if you accept pluralism at the start, you won't get there. There's a tension there: "okay, pluralism, everybody must be pluralist and this is the only way" is actually absolutist and singular at a higher level. So pluralism means we each do have our perspectives and yet there is a way of working; it's not the same as swapping an absolute answer for no answer at all. Those are the kinds of local logical paradoxes we see all around us. That's a great point. Stephen, and then anyone else. Yeah, that's the rabbit hole of postmodernism we don't want to go down. But this is a really rich conversation, and, like you say, being able to jump from Helmholtz to some of these transcultural things shows you that it's pretty
foundational, some of the stuff we're touching, so that's great. Thanks for a lovely talk today, I really enjoyed it. Thank you. Yeah, thanks a lot. Any closing notes, Alex? Otherwise we're all probably going to re-listen and reread and think about what would be good to ask next week. Yeah, thank you all, it's really fascinating and fun, and I just want to do more of it next week. I didn't have anything else to say, but I kept speaking, I'm not sure why. Same, we do that sometimes, it happens. So thanks again to everybody participating live and on the live stream, and we will see you next week.
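For readers who want to see the contrastive divergence versus simulated annealing distinction from the discussion in concrete form, here is a minimal sketch of simulated annealing on a toy one-dimensional energy landscape. The energy function and all parameter values are illustrative inventions, not from the paper; the sketch also makes visible the inverse probability-energy relation mentioned above, since the Metropolis acceptance rule uses p proportional to exp(-E/T):

```python
import math
import random


def energy(x):
    # Toy double-well energy: minima near x = -1 (global) and x = +1 (local),
    # separated by a barrier of height ~1 at x = 0. Purely illustrative.
    return (x ** 2 - 1) ** 2 + 0.2 * x


def simulated_annealing(steps=5000, t0=2.0, t_min=1e-3, seed=0):
    """Minimize `energy` by sampling while gradually lowering the temperature."""
    rng = random.Random(seed)
    x = rng.uniform(-3.0, 3.0)
    for i in range(steps):
        # Geometric cooling schedule: the defining feature of annealing.
        t = max(t_min, t0 * (0.999 ** i))
        proposal = x + rng.gauss(0.0, 0.1)
        delta = energy(proposal) - energy(x)
        # Metropolis rule: accept uphill moves with probability exp(-dE / T),
        # exploiting the inverse relation between probability and energy.
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = proposal
    return x


x = simulated_annealing()
print(x, energy(x))  # x ends in one of the two basins, below the barrier
```

Contrastive divergence, by contrast, does not cool a temperature toward a minimum: it runs only a few sampling steps of this Metropolis or Gibbs kind at a fixed temperature, starting from the data, and uses the gap between data statistics and those short-run samples to update a model such as a restricted Boltzmann machine. That is the "two different internal processes" contrast Alex draws above.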