Hello, everyone. Welcome to ActInf Lab Livestream #25.2. Today is July 13, 2021, and we're going to be talking about this paper, "The Computational Boundary of a Self: Developmental Bioelectricity Drives Multicellularity and Scale-Free Cognition." We are here with the author, Mike Levin. We are a participatory online lab that is communicating, learning, and practicing applied active inference. You can find us at the links here on this page. This is a recorded and archived livestream, so please provide us with feedback so that we can improve our work. All backgrounds and perspectives are welcome here, and we will be following good video etiquette for livestreams. At the short link you'll find all the livestreams that we've done so far, and we're going to have a new series of livestreams coming up in the next semester. So we're going to be jumping off here with the author discussing this paper, and we will just do a brief round of introductions. I am Blue Knight, and I am an independent research consultant based out of New Mexico.

Yeah. My name is Sarah Davis. I'm a master's philosophy student at this point, a past engineer and science artist, just generally interested in how the world works. Michael?

Yeah. I'm Mike Levin. I'm a professor in the biology department at Tufts University. I run the Allen Discovery Center at Tufts, and I'm also associate faculty at the Wyss Institute at Harvard.

Cool. So, Sarah, what is something that you're excited to talk about today, or something that you liked or remembered about the paper? Maybe you want to give a heads-up?

The most interesting through line for me, in all of Michael's work that I've looked at, is this connection, or maybe this tension, I don't know: the way morphology plays with electricity, which one is in the driver's seat, some kind of understanding of how those two work together.

Awesome. Are there any burning questions that you have that you want to start off with, or otherwise I can just start?

No, I feel like that particular focus is going to come up down the road. So please, yeah.

Okay. So here we are on the paper we're going to be discussing today. How do multiple nested scales of individuality work?

Yeah. So I want to preface this by saying, and maybe this is obvious, that all of this is very much a work in progress. I'm going to tell you what I think about these things, but I certainly am not claiming that this is all worked out, or that I'm not going to change my mind at some point, or that this is not going to develop in some fashion. This is under constant work. I think the fundamentals of all this are simply that there are multiple interpenetrating systems at different scales that are all present simultaneously, all of which can be profitably looked at as individuals. And so this diagram here, it looks like all it's saying is the mere fact of a physical structure: that if you look, you see organisms, and within those you've got organs and cells. But that's not the whole point. The point isn't merely that they're arranged this way, but that you've got subunits, and those subunits have subunits themselves, all of which are selves. And what I mean by them being selves is the central claim of the paper: that they are to some extent, so not binary, but on a continuum of agency, they are to some extent goal-directed agents. They're trying to achieve certain aims.
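To make the nesting idea concrete, here is a minimal toy sketch (an editorial illustration only; the class names, numbers, and setpoints are hypothetical and are not from the paper or the talk) of two levels of goal-directed homeostasis: each cell pursues a local energy setpoint, while the tissue they compose simultaneously pursues a larger-scale target that no single cell represents.

```python
# Toy sketch (not from the paper): two nested "selves", each pursuing its own
# goal in its own space. Cells hold a local energy setpoint; the tissue they
# compose pursues a larger-scale target (here, total mass) at the same time.

class Cell:
    def __init__(self, energy=0.5, setpoint=1.0):
        self.energy = energy
        self.setpoint = setpoint          # local metabolic goal

    def step(self, food):
        # Homeostasis: reduce the error between current energy and setpoint.
        error = self.setpoint - self.energy
        self.energy += min(food, max(error, 0.0))
        return error                      # residual stress at the cell level

class Tissue:
    def __init__(self, cells, target_mass=8.0):
        self.cells = cells
        self.target_mass = target_mass    # larger-scale goal no single cell "knows"

    def step(self, food_supply):
        mass = sum(c.energy for c in self.cells)
        tissue_error = self.target_mass - mass
        # Tissue-level control: only release food while the collective is below
        # its own target, even though each cell still "wants" more energy.
        share = food_supply / len(self.cells) if tissue_error > 0 else 0.0
        cell_errors = [c.step(share) for c in self.cells]
        return tissue_error, cell_errors  # goals at both scales, simultaneously

tissue = Tissue([Cell() for _ in range(10)])
for _ in range(5):
    tissue_error, cell_errors = tissue.step(food_supply=2.0)
    print(round(tissue_error, 2), [round(e, 2) for e in cell_errors])
```

The only point of the sketch is that error signals exist at both scales at once: once the tissue's own target is satisfied, the cells can still sit slightly below their setpoints, so the higher-level goal neither replaces nor switches off the lower-level ones.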
And so when I say that they're nested, all of this simply means that you can have systems where the different parts are all actively trying to achieve various things in their own spaces. And we should talk about that; an important idea that has developed since this paper was written is this idea of the different action spaces that these things are working in. But they're nested in the sense that, just because a higher-level system acquires a goal or is pursuing a goal, that doesn't mean the lower-level systems aren't doing it too. All of these things are in fact simultaneously doing their best to pursue various kinds of goals. So that's what I meant by multiple nested individuals.

Awesome. So is there a total 100% of individuality that gets partitioned? And something that I've been really interested in is how information is passed forward and backward across levels. Is there some kind of coarse-graining, or does salience dominate which information gets passed forward? How does that information flow work?

Yeah. So the scaling and the relationship between layers is of course one of the most interesting things here. One of the things we are currently trying to do, both experimentally and with modeling, is to show some very tight examples, because this paper is largely qualitative, but we're now trying to show some very specific examples where we can actually track the scaling. So you have lower-level subunits; let's say you have cells that have only local metabolic goals. All they know how to do is pursue some energy level, in a kind of homeostasis, to stay alive. And the question is, how do you connect those in a way that is then going to give rise to a larger agent that has much larger-scale goals, that can work towards a state of affairs, and can actually be stressed by the failure to reach a state of affairs, that is way bigger than any individual cell. And I'm not sure that there's any kind of conservation here, in the sense that there's only a total amount that you have to partition among the levels and that's all there is. I'm not sure I would make that claim. I think the total amount of individuality can change over time; it can rise and fall. And in fact the boundaries between the different agents can grow and shrink; that was a significant part of the paper. And there are a couple of interesting things that get passed upwards, so to speak. Definitely a kind of coarse-graining, in the sense that if you are a system whose parts are themselves competent at getting various things done despite changing conditions and so on, they're not just hard-wired mechanisms; they're actually competent to pursue specific outcomes despite perturbations. What that means is that the larger system is now working in a much easier space: if it doesn't have to micromanage all the microstates of the individual pieces, but can rely on those components to get their job done, then what it's really working in is a much lower-dimensional, simpler space where problem-solving gets easier. And just as a simple biological example, we now know that, for example in the tadpole, if you activate a signal, which happens to be a bioelectric signal, that triggers eye formation in another part of the embryo, let's say on the tail, then all kinds of interesting things will happen.
It will build an eye, and it will recruit cells that you never directly manipulated, so it's a local self-organization process; that eye will form even though it's sitting in the middle of the muscle instead of in the brain where it belongs. All of these things are going to happen anyway. And by the way, those tadpoles can then see out of that eye, even though it's on the tail instead of in the head. So if you are a system that needs to solve some problems, in terms of "I've got some eyes and I need to do this behavior," and you don't have to micromanage all of those components, telling each cell of that eye exactly where it's going to go and what exactly it's going to attach to and so on; if there is an effector that you have access to, an affordance maybe, that says "build an eye," and you as a larger system don't need to know how that's going to happen, you can just trust that it is going to happen under a wide range of conditions, then you have a much simpler space to work in, and the problem-solving gets easier. That's one of the things we're doing now: trying to map out some of the different spaces in which all these different agents are actually solving problems, and trying to define what that means. So one of the things that gets passed up is this coarse-graining, because of the lower-level competency. That's, I think, a key thing, and maybe we'll get to talking about what that does for evolution, because I think it's massively important.

Nice. Sarah?

Unfortunately they've changed the interface, so I don't know where the hand-raise is anymore, but one question that keeps coming up for me in all the different realms of discussion about this, in particular with the bioelectricity part: you use the word subroutines when you talk about how we think we're starting to understand the combinatorics of how this works, and it's also related to these nested levels. I just keep wondering whether it's your sense, from all of this work that you've done, that in nature, in the communication between these levels, there's any amount of abstraction or semantic layer that's needed for that to happen, and also with the electricity. Subroutine implies that there's some kind of semantics that something has to rest on, and that just keeps coming back to me. Do you have any sense about that?

Yeah, so a couple of things. One thing about bioelectricity is that I think bioelectricity is interesting not because it's in some sense magical, or because there's something uniquely better about bioelectricity than biomechanical or biochemical modes of interaction. What I think is really interesting about bioelectricity is that it is what is being used, and perhaps these other modes are too, we don't know, but bioelectricity for sure is being used, as the computational medium by which the lower levels bind together into the higher-level cognition, whatever level that cognition may have. So what the bioelectricity allows us to do is to directly peek into that computation. And I think that's why it's interesting, which is no surprise; when people do neural decoding, that's exactly what they hope, right? That by tracking the bioelectricity, they learn something about the information content of the global system: what is the animal or the human thinking about? It's exactly the same here. So that's what's unique about bioelectricity.
And I think what evolution discovered very early on, around the time of bacterial biofilms actually, and then capitalized on by pivoting it into neural kinds of systems, and we do it in our computational devices and so on, is that electricity is a great way to process information, to integrate information across distance, and to do computations. That shouldn't be a surprise to anybody. So I don't know if I would claim that it has any kind of formal syntax at these levels or anything like that, but what it definitely does do, when I say subroutine, what I'm mostly leaning on is this idea of modularity, but not just modularity of form, because that's been discussed a lot in evolutionary developmental biology, but actually modularity of function. The thing about a good subroutine is that when you activate it, you can move along and assume that everything necessary to get that job done, given whatever local conditions or current events or whatever else is happening, is encapsulated in that subroutine. For people who code, it's the difference between macro substitution, where all you're doing is plopping in a shorthand for a bunch of hardwired steps, and I don't think that's the magic here at all, versus a degree of competency in that subroutine to get something accomplished, and that has really important implications for evolvability and so on. And what the bioelectricity allows you to do is basically to have a language in which the system can call up such circuits, with specific behaviors, at different points or at different times. That makes it really powerful. But the question of whether it has any kind of formal syntax or anything like that, I don't have good evidence for that.

I don't even mean a formal syntax. It's almost, and this also holds true between these nested layers of organismal whatever, whether you have a sense that there's any abstraction layer needed. And I guess you kind of answered it by saying that electricity is this medium on which things communicate. But yeah, I don't think it's an easy question to answer; it's just the theme that keeps coming back to me.

I think there's a lot of abstraction in the sense of, and I don't know if this captures what you're asking, but there's a lot of abstraction in the sense of generalization; another way to say it is coarse-graining. Voltage itself, let's just start with that. Voltage itself is an abstraction for cells. In fact, people will often ask, if I say there's a bioelectrical signal, well, is it the potassium level? Is it the sodium level? Is it the particular ion channel gene? Is it the voltage or is it the current? And the cool thing about bioelectrics is that that question has been answered, and we know the answer: it's none of those things. It's a pattern of voltage that is specific for the downstream effects, and, with very rare exceptions, it doesn't matter how you got to that voltage.
So you can use sodium channels, you can use potassium channels, chloride; which ion you used, all those micro details don't matter at all, because what the collective is keying off of is the spatiotemporal distribution of voltage, and you can get exactly the same results with sodium or potassium or chloride as long as you're driving the right voltage patterns. So voltage itself is a really cool abstraction, because it means that all the underlying mechanisms are free to diverge in evolution. You want to swap out a potassium channel for a sodium channel? You can do that, as long as you've got the right functional properties so that your overall pattern stays. And we use that quite often in our regenerative medicine applications; it's a really convenient property. So yeah, the bioelectric code itself is definitely an abstraction over the microstates, in terms of the specific ions or the specific channel genes that got you there.

Nice. Stephen, did you want to say hello?

Yes, hello. Hi there. Hi, Michael. Thanks for joining us. Yeah, I was curious. We had a really interesting discussion where Chris Fields gave us a really good insight into the quantum contextuality idea of how things could build up from different sorts of state spaces at very small levels. And active inference often uses non-equilibrium steady-state attractors as a way to look at the statistics between the scales; there's some sort of flux or flow that's inferred. And I find it really useful thinking about it with this cone idea that you bring to the table, because there's more of an idea of a distinct swarming going into a structure, and some sort of system that might emerge from that, which is sometimes lost when it's very much in the math. And I'm curious, when you think about these bioelectric effects, and the same sort of question came up with quantum contextuality, is there a distinct scale at which there's a coherence? You know, you've got the cell membrane, and then a coherence around which the bioelectrics can operate, so to speak, and then you might have another distinct level at which the bioelectrics can operate, which is maybe more distinct than the more gradated idea of steady-state statistical manifolds. So I was wondering what your thoughts are about how and where there are scales at which the bioelectrics become particularly coherent, or whether that even happens?

Yeah, I mean, again, prefacing this by saying that the bioelectrics is not an essential part of the story, except insofar as I think morphogenesis is a nice example of an unconventional agent within which to play with these kinds of ideas. What I would like to do, and this is a framework that I've been developing recently, is to get away from the standard sorts of living creatures that we're used to, the products of that one trace of evolution in the biosphere on Earth, and to go beyond that and look at the space of possible agents, the space of possible bodies, and the space of possible minds that those bodies will support, and think more broadly. So in doing that, I think that morphogenesis is a nice example of an unconventional agent that lets us see how the framework is to be practically applied, right?
So people say, okay, this is all well and good, but what use is it? Okay, let's apply it to a specific thing that really no one thinks of as a cognitive agent; I'm going to show you how all this stuff maps, why it's useful at the bench, and how it helps do new experiments and so on. So if we accept that morphogenesis is a kind of test case, and there are many others, it's certainly not unique, but it's a test case for an unconventional place to find agency, and for testing how well this theory helps us out, then what's cool about bioelectricity is that it actually allows us to tell a very specific mechanistic, testable, empirically useful story about how the information flow happens and how the levels scale. But that just happens to be how the morphogenetic self does things, and there are plenty of other interesting selves that wouldn't have anything to do with bioelectricity. I just want to make that clear: bioelectricity is not some sort of essential element that's always going to be present whenever we apply these things. It just happens to be central to the example case that I've tried it out on, which is morphogenesis. Now, having said that, bioelectricity is definitely multi-scale, because you have bioelectric dynamics in organelles at a very small scale. You have the resting potential across the cell membrane, which is mainly what we study. But those scale up into tissues, and you have global electric fields, driven by transepithelial potentials on the scale of tissues and organs, which other people like Min Zhao and many others study. And then past that, you have whole-animal, whole-body-scale fields as well. So all of these different levels exist; some rely more and some less on bioelectricity. But yes, in this particular unconventional agent, it's relevant at all the scales that we know.

Okay, that's really helpful. Thanks. And would you say that the cone concept could be extrapolated to other types of multi-scale Markov blanket type ideas? I was just wondering what your thoughts on that are, or whether that just confuses things to try it.

No, I don't think it confuses anything. Actually, I'll say, I like Karl's ideas very much, but I'm by no means an expert on the math and I'm not going to pretend that I understand all of it. But I think the thing about these kinds of diagrams is that they're not meant to be unique or exhaustive. The goal of this, originally, this came about a couple of years ago at a Templeton meeting where we were asked to brainstorm ways to define almost a sort of eigenspace where you're able to directly compare really diverse intelligences, things where you can't just measure all their brains and see how they shake out, really diverse intelligences. So what I tried to come up with is a scale, and it's all focused around the spatiotemporal scale of the goals that any given agent can work towards, and the scale of the states of affairs that can stress out that agent when they're not being met, and so on, as a way to pick on something that I think is central to all agents, no matter what they're made of, no matter how they got here, evolved, designed, combinations, accidents, whatever, wherever they came from and whatever they are: what is essential to them.
And I picked goal-directedness, but this is compatible with all kinds of other systems. You could overlay on top of this almost anything else that you thought was critical about being an agent, just start adding dimensions, and absolutely, this is compatible with all sorts of things. This is not meant to rule out anybody else's favorite framework. This thing captures one slice of what's essential about being a self.

Thanks, that's really useful. Actually, can I ask just one more thing? I was really interested in this. I do a lot of work with community psychology. So there's this challenge of going from the self, there's the self and then there's society, the social, but the intersubjective kind of group dynamics is a big problem, because it's like a big hole. And I find this really useful, because a lot of the work that talks about groups tries to project out into the future as if it's a light beam that you could measure, whereas here it's much more like it's sitting in the swarming. I like to think of the bottom of the cone as almost the swarming dynamics that go down into the past; you've got the kind of structure of the cell that's emerged, and then the system that's able to go into temporal depth in the future, and the cone. And I think that offers a nice way to think about group dynamics, particularly, maybe ironically, the more challenging group dynamics where there isn't a coherent single narrative. So there might be applications there.

Yeah, this is super interesting. And I should say one of the things about the cone, and we had a discussion about this yesterday, and Pranab Das pointed out that my cone, compared to Minkowski's cone, is upside down, right? And the reason I did it this way is because, if you think about any given agent, the only thing the agent is really certain of at the moment is what's going on right now. Your future obviously is uncertain. And when you're trying to manage goals and you're trying to predict, you've got some predictive capacity going forward, but that predictive capacity doesn't get better over time. It gets worse over time, right? You're sort of exponentially more uncertain about what's going to happen later on. So your ability to pursue goals in the future gets smaller and smaller. The same thing goes back into the past, although you could argue that that's linear and not exponential. But again, what you're doing at any given moment in time is inferring what your past was from the engrams of memory that are available to you. They might be a brain, they might be a stigmergic medium that you live in, you know, if you're an ant colony, whatever it's going to be. You've got some kind of material in which you leave memories, and they leave traces. At any given moment, you are reading those traces, recalling them, and reconstructing what you think the past was. That's your only evidence for what the past was. At any given moment, you have no idea what actually happened. And so even going backwards, my cone comes down, because your ability to go backwards in time and be certain of what went on, of what are the facts on which you can now build your goal-directed strategy going forward, becomes more and more uncertain the further back you go.
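One purely illustrative way to write down the asymmetry just described (an editorial formalization, not an equation from the paper or the talk; C_0, lambda, and kappa are hypothetical constants): confidence about states of affairs decays roughly exponentially with distance into the future and, arguably, only linearly with distance into the past.

```latex
% Illustrative only: C_0, \lambda, \kappa are hypothetical constants.
C_{\text{future}}(\Delta t) \approx C_0\, e^{-\lambda\,\Delta t},
\qquad
C_{\text{past}}(\Delta t) \approx \max\!\bigl(0,\; C_0 - \kappa\,\Delta t\bigr),
\qquad \Delta t \ge 0.
```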
So that's why my cones are upside down, so to speak, and also because there's no particular reason this has to map exactly onto Minkowski's formalism, although I think it's close. It's just because your level of certainty is highest right now, both in space and time, and when you step away from that it rapidly, at least for all of us, gets smaller.

Yeah, we were actually talking about this last week in our discussion, and we were thinking about this as like an information cone, because you have the most information about your environment at this point in space, and less information as you go forward and backward in time. So do you think that might be the correct way to think about it?

So that's true. That's absolutely true. It's definitely also an information cone, but I want to be clear about this, because this diagram is not a diagram of what you can sense or how far your effectors range. What you just said about the information certainty is true, and the consequence of that is that you also have a limited cone in what the diagram really is supposed to show, which is the scale of the goals towards which you can work. Because the information is smaller, your ability to pursue goals, how far into the future you can actively be working, just becomes limited. So it's sort of a secondary consequence of that, but I just want to be clear that this is not meant to be fundamentally a picture of information certainty, of sensory capacity, of effector range, or anything like that. This is a diagram in goal space.

Nice. So one of the other ways I've been thinking about it is kind of like a possibility cone. As you progress forward into the future, we are left with one possibility, and that's death. So your goal at that moment is death. And similarly in the past, we all come from a single-celled fertilization event. So we all started at an event and will end at an event, and the maximum possibility is now. What do you think about that?

Yeah, I mean, that's a really interesting point, right? People talk about great transitions in cognition along the continuum and so on. And I think one great transition is the following. If you are a creature with a cognitive horizon of, say, half an hour, you're a goldfish or something, I don't know if this is factually true, but say you're some sort of form that has a cognitive horizon of half an hour, what that means is that all of your major goals, survival and these other things, are totally achievable, right? Because it is completely plausible that you are going to make it for that half hour that you can project forward. If you are a human, and your cognitive horizon, your goals, are, I don't know, world peace, or your buildings, larger things, you are, for the first time, perhaps along Earth's lineage anyway, able to comprehend goals that you have no chance whatsoever of achieving, because you have a limited lifespan. So that's a really sharp transition, kind of an emergent transition, because for the first time you are able to undertake goals that you know, guaranteed, you are not going to complete. Whereas the fish, maybe it'll get eaten and maybe it won't, but it's totally plausible that it lives for the half hour that it can see forward.
For humans, I suspect, and this is way above my pay grade to know the details, but I suspect this drives a lot of the weird facts about human psychology, because you now have this fundamental stressor that other creatures don't have: you're able to foresee all these goals that you are definitely not going to meet. And that's a novel sort of conflict between the different modules that doesn't exist, I think, for most other creatures.

Yeah, that's really interesting. Something that might be uniquely human is this possibility of seeing past the cone of our own existence, into the future with these future possibilities or future goal states, and also backwards into the past in a historical sense; we all look at history. So, I don't know, I like thinking about, you know, I was reading your recent paper on integrating evolutionary and developmental thinking into scale-free biology, the paper with Chris Fields. And I know when we spoke yesterday, you mentioned you were integrating these ideas into your next paper. But in that paper, you were kind of blurring the boundary between the processes of evolution and development, and the phylogenetic memory that's contained in DNA and results from evolution transforms into this ontogenetic...

Yeah, definitely. So there are a couple of really interesting things related to evolution that we can talk about, and that's something I'm working on now; this thing should be done in a couple of weeks, we'll see. Two things. Number one, one of the amazing things about the biological hardware that's given to us by evolution is that there's a default, and by that I mean that cells with their genomically determined proteins, whatever they have, produce a default, pretty robust, consistent outcome, right? Fish embryos make fish, and frog embryos make frogs, and so on. But there's an incredible amount of plasticity there, so that those exact same cells, genetically wild-type, so the hardware is all the same, are in fact, in other types of environments, able to make completely novel creatures. And synthetic biology, synthetic morphology, is already showing this, with our xenobots and things like this, and there'll be tons more going forward. One thing that's interesting about those novel creatures is that you don't have a long history of specific selection and frozen accidents and all this phylogenetics to lean on when you ask why they do certain things, right? For every normal animal in the biosphere, if you ask, why is this thing green, and why does it fly, and how come it has these antennae, the answer is always the same: well, because for millions of years the ancestors were selected for this and that. The fact that we can take cells from those kinds of animals and put them together into a new creature that has goals and behaviors, morphological goals, behavioral goals, physiological goals, that basically appear overnight, within 24 to 48 hours they show up, raises the question of where these things come from, because what they didn't do was get honed over millions of years by specific selection pressure.
So that's a very interesting aspect of the plasticity here, to me: the history doesn't necessarily determine them, other than the default variant, and where does that come from? And then there's the whole other thing, and you can tell me if you want to dive deeper into this or not, which is that having multiple layers, with competency going all the way down, has massive implications for why evolution works so fast, right? Why anything is actually able to evolve on the time scale that we see. So, if you're interested, we could get into that.

Yeah, that's great. Why don't we unpack that?

Okay. Let's talk about that a little bit. Imagine two kinds of creatures. You've got one kind of creature where the genome has a pretty direct influence over what the anatomy is going to be; everything is hardwired in some sense. Let's visualize it: say it's a frog of some kind, and there's the genome and it makes a frog, and that's what it does. Now evolution is searching the space, and there's a mutation. And the mutation does two things, because most mutations do multiple things. Let's say we get a mutation that moves the eyes off kilter, and also has some really beneficial effect somewhere else; it helps with some other things, as most mutations are pleiotropic, so it's a reasonable conjecture. If your animal is the hardwired kind, the eyes are in the wrong place, nothing works. Evolution never gets to explore the positive benefits of this other mutation, because the thing is going to be dead. It's just not a workable animal anymore, the fitness is very low, it's gone. When you think about these kinds of hardwired things, it's like when I first heard, now a hundred years ago, about how evolutionary algorithms are supposed to work, this idea that I've got a system that does something and we start throwing random changes in. Anybody that's written any code thinks that's crazy: of course everything's going to get worse, not better, that's never going to work. Now, of course, it does work, but it takes massive amounts of time, and there are always the difficulties of all the complex interactions. Now imagine what we actually have, and I'll explain the biological example. You have a tadpole, and the tadpole has to move all of its craniofacial organs around to become a frog. The jaws have to come out, the nostrils move, the eyes move forward; all this stuff moves around. And you could have a hardwired animal in which every organ moves in the right direction a particular distance, and that's it, right, that's your hardwired thing. That actually is not how animals work. So what we did a few years ago is make what we call these Picasso tadpoles: you make tadpoles where everything is in the wrong place, the eyes are on the back of the head, the mouth is off to the side, the nostrils are up here; it's just scrambled, like a Mr. Potato Head, everything's scrambled.
So if it was a hardwired system, then all of these things would move the traditional amount in the traditional direction, and everything would end up way off, because you're starting in the wrong spot. Instead, what happens is that all of these organs move along novel paths, and they keep moving, and sometimes they overshoot and have to come back, but they keep moving until they build a correct frog face, and then they stop moving. So what the genetics actually gives you is not a hardwired set of movements; what it does give you is an error-minimization scheme that can progressively deform the system until the error from some sort of setpoint, and we've been studying what the setpoint is, we can talk about that, gets to an acceptably low level. So what you have here is a competent set of craniofacial organs that are going to move around and get to where they need to go, even if they start off in the wrong position. Now look at what happens with evolution. You've got your mutation; the eyes and the jaws are now in the wrong place, but you've got this benefit somewhere else. The thing is, the eyes and the jaws are going to get fixed. You don't need to worry about that, because they know where they belong; they're going to get there even though they start out in the wrong position. So that negative aspect of the mutation gets hidden from selection, for a while anyway, and eventually it will canalize into the genome itself, but for a while it gets hidden from selection, allowing evolution to explore the other benefits of that mutation. So here, in summary, is what this means. The fact that you have competent subunits, competent morphologically, competent physiologically, we can talk about what that means, the fact that they're all competent means that the fitness landscape is much less rugged; it's much smoother. It means that the effects of mutations are much more linear, meaning you can examine each consequence independently, because the other consequences can be masked: the local environment being off doesn't matter, things still get where they need to go and connect and so on. So it makes the search process much easier, and it gives you patience; it means that evolution doesn't have to solve every problem at once. You don't have to wait until you find a mutation that gives you a positive impact and also leaves the eyes where they need to be. You can grab the first mutation you find that gives you the positive impact, because the eye thing will get taken care of, because those subunits are competent. So it raises the IQ of the whole system: the search becomes a much smarter search, it has some patience, it's able to see the consequences of mutations linearly instead of all mixed together, and it smooths the fitness landscape. I think when you can count on your components to do some of the heavy lifting, and you don't have to micromanage all of it, the evolutionary search gets massively more efficient, and it starts to become a little more plausible that this amazing biosphere we see actually evolved by a process of random mutation.
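A toy numerical sketch of that argument (an editorial illustration only; the fitness function, the 90% correction factor, and all numbers are hypothetical, not a model from the paper): a single pleiotropic mutation brings some benefit but also displaces the eye. In the hardwired map the misplaced eye erases the benefit, while a competent module corrects most of its own positional error before fitness is assessed, so selection can still see the upside.

```python
import random

# Toy sketch (hypothetical numbers, not a model from the paper): a pleiotropic
# mutation both improves some trait and displaces the eye. "Hardwired"
# development pays the full cost of the displacement; "competent" development
# lets the eye module correct most of its own positional error.

EYE_TARGET = 0.0                  # "correct" eye position, arbitrary units

def fitness(benefit, eye_pos, competent):
    if competent:
        # a competent eye module autonomously corrects ~90% of its positional error
        eye_pos = EYE_TARGET + 0.1 * (eye_pos - EYE_TARGET)
    return benefit - 5.0 * abs(eye_pos - EYE_TARGET)   # misplaced eyes are costly

random.seed(0)
for competent in (False, True):
    wins = 0
    for _ in range(1000):
        benefit = random.uniform(0.0, 1.0)      # pleiotropic upside of the mutation
        eye_pos = random.uniform(-1.0, 1.0)     # pleiotropic displacement of the eye
        if fitness(benefit, eye_pos, competent) > 0:
            wins += 1
    print("competent" if competent else "hardwired", "beneficial outcomes:", wins)
```

In this toy setup the hardwired case only rarely yields a net-beneficial outcome, while the competent case yields one most of the time: the same kind of smoothing of the fitness landscape described above.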
Nice. So in that same paper you talk about random mutation, but then you also describe this kind of modularity of gene groups and the possibility of entire gene families, like the Hox cluster, duplicating and driving major evolutionary transitions. Is this a sort of non-random process, a sort of ergodicity breaking, with perhaps non-linear effects when these entire clusters duplicate?

Yeah, I hesitate to say too much, because I don't think much is known there that I can say definitively. This whole issue of random and non-random mutations is obviously very controversial. I suspect that there is a notion of evolutionary search that's not quite blind. People often argue one of two ways: most people nowadays say evolution is completely blind, it just makes these mutations and whatever happens is not because of any directionality in the search space; and then you have people who think that it's directed towards a specific outcome. I think there's an in-between scenario, where it's definitely not directed in the sense that it can see far forward in the search space, but it doesn't have to be completely blind, in that maybe, and this comes out of some discussions I've had with both Chris Fields and Richard Watson, maybe instead of a point in fitness space, what you really have is a little bit of a vector. You have almost like a comet trail, and that gives you a little bit of information about direction. You can imagine, I can sort of design in my head, an epigenetic system of marking things on the DNA that gives you a little bit of information: well, what was this allele last time? So that you can say, well, I've been turning this knob in this direction and things are going well, so maybe I want to turn it some more, that kind of thing. And this is totally made up; I'm not saying that we know biology actually does this. Maybe it doesn't, and maybe in synthetic biology we engineer it for the first time, but I think it's possible to make a system that does that. And maybe collections of genes, and some of the complex chromatin epigenetics that go on, are part of that process, but this is total conjecture at this point.

So Sarah may have a question, but I'm going to use my moderator's privilege here and just ask: do you think that this vector-driven mutation could perhaps be driven by active inference, some kind of vector between exploration and exploitation?

Yeah, and so one of the things that we have talked about, and that I'm going to write about after this next thing, is really this idea that maybe the whole evolutionary lineage of a particular animal or plant or whatever it is, maybe that's an agent as well, just stretched out massively over space and time, such that at any given point in time all of the existing examples of that species are in fact hypotheses about the space, right? What you're doing is constantly generating genomes as hypotheses, and some of those hypotheses are proven good and some are not, and then the next set of genomes is not completely random but is in some way part of the active inference of this giant, sort of virtual agent. So I think there's a story to be told like that, but it's way more hand-waving than details at this point.
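A toy sketch of the "comet trail" conjecture just described (an editorial illustration; entirely hypothetical, as the speaker stresses the biology here is conjecture): a lineage that remembers the direction of its last accepted change and biases the next mutation the same way, compared with a fully blind mutator, on a one-dimensional fitness peak.

```python
import random

# Toy sketch of the "comet trail" idea (purely hypothetical): a lineage that
# remembers which direction its last accepted change moved a knob can bias the
# next mutation that way; a blind lineage draws every direction at random.

def fit(x):
    return -(x - 10.0) ** 2                   # single fitness optimum at x = 10

def evolve(biased, steps=60, seed=1):
    rng = random.Random(seed)
    x = 0.0
    last_dir = rng.choice([-1.0, 1.0])        # no directional information yet
    for _ in range(steps):
        # Biased lineage repeats its last successful direction 70% of the time.
        direction = last_dir if (biased and rng.random() < 0.7) else rng.choice([-1.0, 1.0])
        candidate = x + direction * rng.uniform(0.0, 0.5)
        if fit(candidate) > fit(x):           # selection keeps improvements only
            x, last_dir = candidate, direction
    return x

print("blind:", round(evolve(False), 2), " biased:", round(evolve(True), 2))
```

For the same number of generations, the biased lineage tends to end up closer to the optimum; the direction memory plays the role of the hypothetical epigenetic "what was this allele last time" marker in the conjecture.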
Awesome, thanks. Sarah?

My comment is maybe a little bit stale now, but I think I'll put it out there anyway, just for the sake of saying something concrete, because a lot of this feels a bit abstract. And I'm hoping I'm even remembering this anecdote right, but I remember learning that some gene, once upon a time, that gave us an extra rod or cone in our eye and allowed us better color perception, was also the same gene that shut off our ability to make our own vitamin C. And if I'm remembering that correctly, that's such an interesting example of, yeah, of course it's an example of the fact that genes do multiple things, but isn't it coincidental that that gene is doing something for the eye, while also making it so that you have to get your vitamin C by looking for fruit? That kept coming back to me as an example of what you're talking about, but it's sort of stale now.

Thanks, Sarah. I'm sorry for making your comment stale. Stephen?

Yeah, sort of following on: you have this idea of a goal space, or a teleological space in a way, that's really interesting, and then with that explore-or-exploit question, I wonder whether there's a thought about when things are very directed. In our Western thinking we tend to think about a nice pointed cone where we go into the future, because we sculpt the world around us and we maybe have a distinctive trajectory, because we're constructing our world more and more to make it the way we want it to be, which is probably part of our problem. Whereas, particularly if you look at indigenous cultures, they would have a more cyclical approach, which would go through seven generations, or different levels of generations, and would be a tuning to the environment. And I'm kind of curious, and again this might be where we don't really know, but is there a point at which the system is generally trying to come back to or maintain, in allostasis, the kind of cyclical pattern that's where we want to go; and is there a time when it's like, okay, we're branching out now, we've made a jump, and we're going to try to inscribe our environment at different scales, from the body outwards, to maintain some new kind of steady-state attractor? How might that fit with this?

Yeah, I think you're absolutely right, and some future version of this is going to have to be able to deal with goals that are not only not static, meaning they're cyclical, but also goals that are themselves, let's say, second-order goals, like: my goal is a derivative, my goal is to get better at something. It's not a particular level that I'm looking for, but a particular slope of some variable, right? Something like that. So you're absolutely right. I started out with the simplest case; this is sort of the hydrogen atom of the thing. I started out with something very simple that is homeostatic, as in your goal is to achieve a particular region in whatever space you're solving problems in, be that morphospace or physiological space; your goal is to find that region and sit there. And that of course is not all there is to life, obviously; that's just the first step. I did that for simplicity, because I didn't want to tackle everything at once, and there needs to be a lot done here before we get to
that point. But yeah, you're absolutely right: your goal state in that space may not be a point, it may be some interesting bigger shape. And I want to say something about these goal spaces, because I think it's interesting to point this out. This all came out of a meeting on diverse intelligences, and so one thing I've been thinking about is: what is intelligence, and what are these agents actually trying to do? We have lots of examples of problem-solving in three-dimensional space, right? You've got crows and monkeys and everything else, running around trying to solve problems by moving around in three-dimensional space. But all the different subunits of bodies are solving problems in other kinds of spaces. So we talked about morphospace: if you are an eye in the head of a frog, you are working in a space of possible configurations of where you are relative to everything else, and you are trying to move and act in a way that is going to arrive at a particular region of morphospace that describes the correct frog head. That's where you're trying to land. There's also a physiological space, and I'll give you a simple example of this that we found recently. These flatworms, these planaria: you take these planaria and you throw them in barium, a solution of barium. Now, barium is a non-selective potassium channel blocker; it blocks all the potassium channels, all the cells freak out, and their heads literally explode. Overnight their heads degenerate; they're gone. If you take those planaria and you leave them in fresh barium over the next ten or twenty days, what they will do is build new heads that are completely barium-insensitive. Couldn't care less, no problem at all; they live in barium just fine. So we asked a simple question, not because the answer has to be this, but just because that's the tool that was available: what is transcriptionally different about barium-adapted heads versus normal heads? Which genes are differentially expressed in these heads? And what you find out is that, out of the tens of thousands of genes that these planaria have, they managed to regulate a very small number, on the order of a couple dozen, that allow them to do their business despite the fact that potassium signaling was no longer usable for them. Now, this is incredible, because think about the problem they're solving. In my head I always visualize one of those nuclear reactor control rooms with buttons everywhere, where each button is a different gene. What are you going to do? You have a physiological stressor, you can't pass potassium, everything's going wrong. You have tens of thousands of genes that you could turn on and off; the combinatorics are astronomical. You don't have time to try them all. You don't even have time for gradient descent; you don't reproduce fast enough. It's not like bacteria, where you could say, well, the variation is random and somebody will survive and repopulate; planaria don't divide that fast. You don't have time for any kind of evolutionary explanation. Within some small number of days, you have to home in on a small number of genes, knowing that if you just start
randomly flipping them up and down, you're going to make things worse long before you make them better; you're going to kill everything off. Now, the other important fact about this is that planaria never see barium in the wild, so there's no evolutionary history of being good at surviving in barium, because they don't encounter barium. To me this is an amazing example of problem-solving in a high-dimensional virtual space. You've got this physiological space, and your problem is physiological, but your effectors are largely transcriptional, although of course you have physiological effectors too. You're trying to walk in this space, and you somehow have to map, it's unbelievable how they do this, you somehow have to map your effectors in transcriptional space onto what's happening in physiological space, and then rapidly get to a workable region. So that gives you an idea of what we're talking about: these spaces can be transcriptional, physiological, all kinds, and these systems solve their problems in them. And like you pointed out, the thing you're trying to achieve may not be a point, it's certainly not a point in that space, but it may not even be a static area; it may be, "I would like to wander in this weird loop, that's my goal," wandering in some sort of weirdly shaped attractor. Yeah, we'll get to that; this is just the first step.

Oh, sorry, just one question following up from that, just before Sarah. So if I'm right, what you're saying is it's not gradient descent, because that would have to start from where you are and slowly keep incrementing. There are somehow some goal states; I mean, maybe there's some evolutionary history, even if it's not barium, of something like barium, but somehow it's not only going from adjacent possibilities, it's somehow jumping ahead. Is that what you're saying?

The first thing I'm saying is that I don't have any idea how it works, right? I am not claiming I have a good model for what's going on here. But I think you're onto something, and I think we talked about this a few minutes ago, which was abstraction and generalization. I think you're exactly right. I think what the system is able to do is to say, okay, I don't know what this barium is, but I've been terribly depolarized before, it was an epileptic kind of a thing, and this is an awful lot like that, so I'm going to generalize what just happened to this other thing that I do know how to deal with, and I'm going to move in that direction. That's my gut feeling. I don't have a solid model for this, and it's not been proven in any way, but I have a feeling that that's where the answer is going to be: it's able to do this because it is able to recognize this physiological problem as an instance of a larger class of things that it does know how to deal with. And I think that when we talk about intelligence for all of these agents, what we're really saying is your ability to navigate these various spaces efficiently, without getting trapped in local minima that kill you, right? Being able to step away temporarily, it's that patience, being able to step away from the direct line to
your goal, to then come around and get there. Sometimes when I talk about this, I have a picture of a fence, and there are a couple of dogs on either side of the fence, and they're just trying to get at each other, right? There's a hole in the fence four feet away, so they could do it; the problem is it temporarily requires you to get further from your goal. So you're trapped in this local maximum, and your intelligence is directly proportional, in a sense, to your ability to temporarily get further from your goal in order to actually get into a better position later on. That's part of the IQ, but I think the other part of the IQ is this kind of generalization, and I think you're right, I think that's probably what is going to end up explaining it.

Nice. Sarah?

Man, this is great. This question is probably out of bounds, but if you have any thoughts on it, great; if not, pass. The cone-type diagram and the agential framing that it assumes: I'm just wondering if you feel like there's any applicability to non-agential type phenomena, like complex systems or hurricanes or whatever, because, since we're talking about abstraction, I'm just wondering if you have any thoughts about where that could go, if anywhere.

Yeah, so the thing I'll say about that, and this is the kind of thing that probably not a lot of people will agree with, but this is my view: to say non-agential presupposes that you can draw a binary, sharp line between things that are agents and things that are not. I don't really believe in that. I think that's a view that gets us into a lot of trouble, looking for these sharp lines. I think all there is is a degree of agency, and I'm not sure, we talked about this yesterday, whether there's such a thing as zero; I'm not even sure there's a zero. But certainly there are things that are at very, very low levels of agency, and things that are very high, and everything is a smooth continuum. So then the question is, okay, what do you do with hurricanes and things like this? So I drew, and this will be in the next paper, a thing that I call an axis of persuadability. It's sort of related to Dennett's intentional stance, in the following way. There's a continuum, and for any given system, my claim is that it's an empirical problem: you can't just say whether something is or is not an agent. Lots of people do; they'll say, well, that's not an agent. I think you can't say that until you've done the empirical work, and the empirical work looks like this. The question, as an engineer, and this is very much an engineering perspective, is: how much work do you have to do, and what do you need to know about the system, to be able to control and predict its outcome? I'll give you a couple of examples. All the way on the left, you have things like mechanical clocks. If you're dealing with a system that's like a mechanical clock, you can put it on the left of this diagram, because you are not going to reason with it, you are not going to train it, you're not going to be able to punish it, you're not going to
be able to rewrite its goal states. You have to know how it works, you have to micromanage the hardware, and you have to rewire the physics of it to get it to do something else. Okay. Then moving on from that, you have something like a thermostat, and with a thermostat you're also not going to reason with it or punish it, but there is a separate setpoint, and if you want the room to be kept in a different temperature range, you don't have to rewire the thermostat; you can change the setpoint. And this has massive implications for regenerative medicine that we can talk about: the way you relate to the system is quite a bit different. After that, you might get to systems that have preferences, in the sense that you can actually train them, meaning you can provide rewards and punishments and they will change their behavior, because you're relying on them to be a kind of learning system. They're able to do associative learning, so they associate certain things you've done to them with things that they did, and they will change their behavior. Now think about what this means: humanity has been training animals for, I don't know, ten thousand years, knowing nothing about neuroscience. It means that you don't actually have to know what's going on inside your system; you don't need that micro-level hardware information. All you need is to know that that system has that level of agency, that it's susceptible to this kind of interaction. And how do you know? You don't know until you've tried. So one of the things we're doing now with Charles Abramson, who's a behaviorist scientist, is actually writing a how-to manual, a checklist, a how-to paper: if you are given some sort of weird new synthetic system, how do you know where it sits along this axis? And it actually has a bunch of experiments that you do to figure out where you are along this path. And you can go further: there are other systems where you know nothing about what's inside, and in fact the amount of energy it takes to manipulate them is tiny, because you can give them reasons as opposed to causes; you can give them information, and they will act on that information and do various things. That's of course for sophisticated agents. So all that is to say, on the question of where something like a hurricane lands on this spectrum, I think we need to really resist the urge to make armchair pronouncements, and we need to ask a simple question: which types of manipulation is that system amenable to? When you find out, then you know how much agency it has. Now I'll give you a simple example of how this works; we and other people have done something similar. Think about gene regulatory networks. Gene regulatory networks are a quintessential example of a dynamical system; they're deterministic and all that. And one of the things we asked is, how do we know where on this spectrum they land? Let's just try to train one and see what happens. So we did this computationally, and now we're starting to do it at the bench, and we simply asked the question: could gene regulatory networks have associative learning capacity? That means you could do
interesting things; you could basically do Pavlov's dogs. You have a gene regulatory network, and you have some stimulus that causes it to do something, so that's your unconditioned stimulus causing the response. You have another stimulus, a neutral one, that normally doesn't do anything. You pair presentations of the UCS and the neutral stimulus together, and after a while what you might find is that if you take away the original unconditioned stimulus and present the neutral stimulus alone, that is now enough to trigger the response, because an associative memory has formed in the network. So we actually tried this: we took a bunch of known gene regulatory networks from the literature, we made an algorithm that checks for associative learning, and lo and behold we found two things. We found that there are six different kinds of memories these networks can exhibit, including associative memories in many of them, and that the biological networks have far more memories than you find in random networks you would just concoct. So apparently, either directly or indirectly, selection likes that. That's an example of what you would do, under this view, with dynamical systems. You might think these are trouble cases, are they agents or aren't they, and I think that's where we get in trouble, because we try to draw a sharp line. I think the answer is that they sit somewhere on the continuum; they could be way over on the left or not, and you find out when you figure out which kind of intervention strategy avails you of the most predictive power with the least amount of effort.
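To make that conditioning protocol concrete, here is a minimal toy sketch. It is my own illustration, not the screening algorithm or any published network: a tiny fixed-parameter "gene regulatory network" in which pairing an unconditioned stimulus with a neutral one flips a bistable memory gene, so the formerly neutral stimulus alone later evokes the response, with no parameters changed.

```python
# Toy illustration (hypothetical network, not the published analysis): a tiny
# fixed-parameter GRN showing Pavlovian-style associative conditioning.
# Nodes: UCS (unconditioned stimulus), CS (initially neutral stimulus),
# M (a bistable "memory" gene), R (the response gene).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def step(state, ucs, cs, dt=0.1):
    m, r = state
    # M turns on only when UCS and CS are co-presented, then sustains itself (bistable).
    m_input = 4.0 * ucs * cs + 6.0 * m - 3.0
    # R is driven directly by UCS, and by CS only when the memory gene M is active.
    r_input = 5.0 * ucs + 5.0 * cs * m - 2.5
    m += dt * (sigmoid(m_input) - m)
    r += dt * (sigmoid(r_input) - r)
    return np.array([m, r])

def present(state, ucs, cs, steps=200):
    for _ in range(steps):
        state = step(state, ucs, cs)
    return state

state = np.array([0.0, 0.0])

# Probe: the neutral stimulus alone, before training, does not evoke the response.
probe = present(state.copy(), ucs=0.0, cs=1.0)
print("R to CS before pairing:", round(probe[1], 2))   # low (about 0.1)

# Training: pair UCS + CS; the memory gene switches into its "on" attractor.
state = present(state, ucs=1.0, cs=1.0)

# Rest period with no stimuli; M stays on because it is self-sustaining.
state = present(state, ucs=0.0, cs=0.0)

# Probe again: CS alone now evokes the response, an associative memory stored
# in the network's state rather than in any rewired connection.
probe = present(state.copy(), ucs=0.0, cs=1.0)
print("R to CS after pairing:", round(probe[1], 2))    # high (about 0.9)
```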
Nice. So we talked yesterday about cells that are contiguous, where there are connections, and this concept of shared information making up an individual, or what constitutes the boundary. Cells are connected via gap junctions, and you've talked about how, if there's a calcium molecule inside my cell, I don't know whether it came from me or from my neighbor, because we share this kind of hole between us. So I wonder about the degree of information sharing, or the proximity required: does it have to be immediately contiguous to have that same cognitive capacity, or can this also work over a longer distance? For example, if I get some kind of necrotizing infection in my leg, is my toe necessarily going to know what's going on with my leg? It's going to take a while for that signal to diffuse over such a long distance. So where is the boundary, or the container, for the kind of information sharing that has to occur to make this cognitive individual a whole thing by itself? I don't know if you can say a little bit more about that.

Yeah. I think one thing I really need to improve for next time is a better visualization, and I frankly don't yet know how to do it, because this way of visualizing it looks very physical. It makes it look as if the cone sits at the edge of the cells that are connected, as if what you're measuring is the actual physical boundary of the system, and that's not quite what I'm trying to get across. Think about the cells in cancer versus multicellularity, which is a nice example of how this works. When you have a bunch of cells and they get together to build a kidney, the spatial size of that particular agent is going to be roughly the size of the thing they're working towards: the size of that kidney. That's the project they're all working on. Each individual cell has no idea what a kidney is, but the collective intelligence of that group of cells has a goal, which is to build that kidney, and if you try to deviate it from that goal, it will do its best to get back and build it anyway. So what you would be measuring there is the size of the state it's trying to achieve, not necessarily where the edges of the collective are. Now, there's certainly some relationship, because in this particular example what enables the cells to work together is the fact that they're electrically and chemically coupled into one giant network. So there's a relationship, but it's not always exactly the edge. And the same thing holds for individual cells: it's related, but it's not exactly the same thing. When one cell defects, say an oncogene turns on, the cell closes all its gap junctions and disconnects from its neighbors, and now that computational boundary is literally just that one cell again. Even then it's not quite at the edge of the cell, because if you look at the scale of things the cell is trying to manage, it's a little bit bigger than that: it also really cares about the nutrients, the waste, the conspecifics, the prey, whatever else is going on right at that edge. So it's not quite at the cell boundary. I just want to make clear that this diagram isn't meant to be anatomical; the cone isn't meant to map exactly onto the anatomy. It's a cone of the states that the thing is trying to actively manage.

Yeah, I think the cones were inflicted by us, though. Yeah, but you're absolutely right, and as soon as you did that I realized it, because that has to be right: those two diagrams eventually have to meet, and there has to be a way to bring them together. My visual imagination isn't great, so I have to think a little more about a better way to represent it, but of course you're right, those two things have to come together. Yes, Steve?

Yeah, I was just following on from what you were saying. It's often useful, when we think of entities and processes, and maybe I'm scaling up to more psychological scales here, but we often think, this is this person, this is what this entity is diagnosed as, and there's been a lot of movement now towards asking what the process is instead. But then what kinds of processes do we have on that continuum, just as we were talking about with the hurricane? Maybe you've got a static entity like a rock, then you've got your systems, some of which could be mechanical like a clock, then you hit structures, something like a skeleton or a network of processes coming together, and then you get into what I find really interesting in your work, the nonlinear swarming dynamics. And I was wondering how, when you think of it, you've got a body and
I'm a body, and I'm trying to work in the space around me, so I see things in my environment out there and I can interact with them to some extent, depending on what my available processes and structures afford me. And then there's other stuff inside, which to some extent I have to let take care of itself, unless I want to tattoo myself or do some sort of internal mutilation or something. So I'm curious how this entity-process dynamic, going in and out, can be thought of.

Yeah. Well, I want to say a couple of things, and again this is going kind of far afield, but I think it might be relevant. When you say you are this body, in the sense that we are talking to the verbal entity here, there are all sorts of other entities in there that we're not hearing from. We know that from split-brain patients: after a commissurotomy, you can communicate with the right side of the brain, and it will give you opinions that the verbal part of the patient doesn't agree with and doesn't know anything about. It's really interesting. I also think about multiple personality; they call it multiple personality disorder, but the reality is that a single brain-body system apparently can host any number of distinct personalities. I'm not an expert in this area, so I don't know how psychologically deep some of these other personalities go, but they go deep enough that you can have a conversation with them, which means they're doing better than all of the AI we've ever produced. In fact, I read once, I forget where, an interesting comment from a therapist who had been asked to work with a multiple personality patient, and he was talking about integration: the goal of the therapy was going to be integration, because of course the condition is disruptive for the individual, so we're going to integrate all of these into one. And one of the personalities said to him, "Integration? You're going to kill me. That's what you're talking about. I thought you were a doctor. What are we talking about here?" It was outraged that there was going to be this integration of what, at least ostensibly, is an entity that certainly passes the Turing test and is not happy about the fact that you're about to integrate it away, basically. So I think that goes back to this idea that we're an interpenetrating set of selves, and we tend to only hear from the verbal one; there are all kinds of other ones in there, competing and cooperating and everything else. That's more questions than answers, I guess, but the other thing I think is interesting is that you're right: the one kind of entity we have direct experience with, the verbal one that presumably we share, is completely ignorant of most of the things going on in there. The space you're working with nicely coarse-grains over all kinds of stuff, your liver function and everything else; you don't need to micromanage all of that. However, there are techniques that will add axes to your option space. So if you would like to learn to also
consciously control your heart rate or your skin temperature or whatever, you can do biofeedback. There are tools that will enable you to cross that boundary, and you will now have a new axis in your option space that you never had before. Do you want it? I don't know; there's a good reason why you don't use most of these, because it's good to simplify your space. But there was a cool experiment where they measured the temperature difference between a rat's ears and gave it a reward proportional to that difference, and rats readily learn to produce, I think, something like a five-degree-Celsius difference between their ears to get the reward. That's amazing for a number of reasons. One is that you've now added an effector to your space, some kind of physiological blood-flow control, I don't know exactly how they do it, that you didn't have before. The other thing, of course, is the credit assignment. Here you are, you're a rat, and you just got some reward, so let me see, why did I get this? Well, my tail is pointing up, my left fingers are clenched, my whiskers are going; which of those things actually mattered? We still haven't cracked this in machine learning: how living things are so good at figuring out exactly what they're being rewarded for in such a small number of trials. So I think you can break through that boundary, at least to some extent, and actually go even further: there are claims that certain types of mental practices will have effects on your immune system, for example. So I think you can go pretty far if you wanted to add these kinds of things to your option space.
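As a purely illustrative aside, and an assumption of mine rather than anything from that rat study, here is a toy version of the credit-assignment problem as a machine-learning exercise: the reward depends on only one of several effectors the agent is jiggling, and a simple REINFORCE-style policy-gradient update has to discover which one.

```python
# Toy credit-assignment sketch (my own construction, not the rat experiment's method):
# the reward depends on just one of several "effectors" the agent controls, and a
# variance-reduced REINFORCE update has to figure out which one is being rewarded.
import numpy as np

rng = np.random.default_rng(0)
n_effectors = 5                  # e.g. ear blood flow, tail posture, whisker motion, ...
means = np.zeros(n_effectors)    # learnable policy parameters, one mean per effector
sigma, lr, target = 0.5, 0.05, 3.0

for update in range(200):
    # A small batch of trials: the agent jitters all of its effectors at once.
    actions = means + sigma * rng.standard_normal((50, n_effectors))
    rewards = -(actions[:, 0] - target) ** 2     # only effector 0 ("ear temperature") matters
    advantage = rewards - rewards.mean()         # baseline subtraction for variance reduction
    # REINFORCE: credit goes to whichever jitter co-varied with the reward.
    grad = (advantage[:, None] * (actions - means)).mean(axis=0) / sigma ** 2
    means += lr * grad

print(np.round(means, 2))   # effector 0 settles near the target (about 3); the others stay small
```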
Oh, interesting. Actually, just bouncing off that, that's really helpful; I hadn't even thought about it that way, that there are these different parts of us. It ties in with the constant push to integrate, integrate, integrate, when actually we may need to scale out. I do a lot of work with multi-service providers, say in Africa or other countries, and you tend to find that any transdisciplinary project has the problem that whichever regime of attention or field of practice holds the voice at that time, when we talk about integrating, it normally means they dominate. So what about configuring instead? Think about this idea of configuring different fields of practice, different ways of knowing, and also treating the other voices as strategic conversations with communities who don't normally get a voice, who know something in the state space beyond what the actors who've got the official job in the agency know. It ties in a little with what you're talking about here: in a way the biology isn't trying to integrate, it's actually configuring and allowing things to be, but it has a way to still know they're different teleological spaces, and it either just lets them get on with it or knows where and when to get involved.

Yeah. I think one way of looking at that is to ask what the optimal policies are for this binding. I'll give you an example. In the case of multicellularity, we say, okay, individual cells are pretty cool, but when they connect via gap junctions into a multicellular collective they can do these marvelous things and they resist cancer, whereas turning off the connections, becoming a cancer cell, and going metastatic is bad, so what you would really like to do is reconnect those cells back in. The point is that tightly coupling individual units into a global entity avails you of all kinds of interesting things; it raises the IQ of the whole and gives you all kinds of capabilities. Fine. But one of the things that gets lost when you do that is that the larger system can now dominate the goals of the lower system, and in fact the larger system may care very little about the goals and welfare of the subunits. When was the last time you worried about all the skin cells you're shedding? You're going to do what you're going to do, and if you lose some skin cells, no problem. So what that suggests is that maximizing this connectivity into some giant Borg-like whole is not the answer either; it has obviously been attempted at the social scale, and we know how that turns out. Somewhere in between, I hope, there's an optimal policy. Maybe there's some no-go theorem somewhere that says there is no good policy, but I hope there is one that allows us to reap the rewards of appropriate kinds of large-scale organization without a blanket collectivism that completely wipes out the needs of the lower-level subunits. And that, I hope, is an empirical question we should be pursuing: which policies optimize the various things we want to optimize? Josh Bongard and I planned a grant at one point for a human flourishing program somewhere, because I think it really is an empirical question: can we identify the optimal policies that preserve the needs of the subunits and still reap the rewards of becoming a greater whole?
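Just to make the shape of that trade-off concrete, here is a deliberately stripped-down toy of my own construction, not anything from the paper: each cell has a private preference, the collective has a shared goal, and a single coupling parameter decides how strongly cells are bound to the whole. With equal weights on the two objectives, the combined score peaks at an intermediate coupling rather than at either extreme.

```python
# Toy illustration of the binding trade-off (hypothetical, not from the paper):
# coupling g = 0 is full cellular autonomy, g = 1 is total "Borg-like" binding.
import numpy as np

rng = np.random.default_rng(1)
prefs = rng.normal(0.0, 1.0, size=100)   # each cell's private preferred state
group_goal = 2.0                          # the state the collective is trying to reach

for g in np.linspace(0.0, 1.0, 11):
    states = (1 - g) * prefs + g * group_goal                 # consensus under coupling g
    subunit_welfare = -np.mean((states - prefs) ** 2)         # cost to the cells' own goals
    collective_perf = -np.mean((states - group_goal) ** 2)    # how well the group goal is met
    print(f"g={g:.1f}  welfare={subunit_welfare:6.2f}  collective={collective_perf:6.2f}  "
          f"combined={subunit_welfare + collective_perf:6.2f}")

# With equal weights, the combined score peaks near g = 0.5: neither full autonomy
# nor total binding, which is the shape of the trade-off described above.
```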
Nice. So I learned a word yesterday: egregore. It's the occult concept of a distinct non-physical entity that arises from a collective group of people, and I thought it was really cool. It makes me think of emergence: at what level does this collective kind of body form? I don't know if you've read the information theory of individuality paper by David Krakauer, but we usually think about emergence as causing the formation of the collective entity, and yet there also has to be some kind of downward causation at some point. So is there some critical point, like a phase transition, where that starts to occur? When does that bidirectional information flow start to happen?

Yeah, for sure. This in particular is interesting, because I thought about it a little while ago. Consider the really old concept of group karma: the idea that not only do you have your own individual cause and effect that you're generating, but in a strong sense the group of which you are a part has its own cause and effect that is just as real. I don't know how many thousands of years old that idea is, but I think it is amazingly prescient, because in Western science, collective intelligence, the science of collective intelligence and this idea that a group or a swarm can be an agent in its own right, is relatively recent; people have only recently started taking it seriously. That kind of thinking must be extremely old, though, because there are traditions that take very seriously the idea that the group is not just a heap of stuff but actually has its own agency, to the point where it can even be the subject of blame and reward, of praise and punishment, of karma. That's a very strong view of group agency that I think we've only really appreciated scientifically quite recently, and I think that's pretty cool.

A random linguistic thing here: egregore made me think of egregious, and egregious in my mind means out of bounds, yet an egregore is the opposite; it's a group thing. Maybe somebody else can think about that differently, but I thought it was an interesting conflict. Yeah, very interesting.

So I think about this concept of collective karma a lot, and really about whether I have free will in the present, whether I'm just some kind of computation. Just because I've been doing active inference: am I just my generative model, such that effectively I can't make any decision except on the basis of my model, so whatever input I'm getting at that moment, I have to decide based on the condition of my model at that time? And I think about collective karma in much the same way, and whether there exists any agency in the present, if all the things that made me who I am right now have simply led to where I am. And in collective karma too, it's like a hurricane: that group of people caused that hurricane to happen, they created the causes, the collective karma, for the hurricane to come. So it's something that really intrigues me, and I don't know if you want to elaborate a little more on that.

Yeah, two thoughts about that. One is that this goes back to the idea of spaces, the different spaces that each of the subunits is working in. After I drew this cone, I realized it looked rather like the light cone from special relativity, and some other pieces of relativity started popping into my head, in particular this idea of deforming the space around you. If you're a mass, you deform the space around you, which in turn alters your next possible movements and the possible movements of the other masses around you. So by your actions you are deforming the possibility space for yourself, for the things next to you, for your components, and so on. And one thing I think happens is that these larger systems deform the action space for their lower-level systems, so that all the lower levels have to do is roll downhill, in effect.
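Here is a small sketch of that "deform the space so the parts just roll downhill" picture; it is my own toy, not a model from the paper. The low-level agent is nothing but greedy gradient descent and never gets any smarter; what changes is the landscape the higher level hands it.

```python
# Toy of action-space deformation (hypothetical illustration, not from the paper).
# The subunit only ever follows the local slope; the higher level edits the slope.

def descend(landscape_grad, x0, steps=500, lr=0.01):
    """A 'dumb' subunit: follow the local gradient, nothing else."""
    x = x0
    for _ in range(steps):
        x -= lr * landscape_grad(x)
    return x

goal = 3.0  # the higher level's goal state

# Raw landscape: gradient of V(x) = x**4 - 4*x**3 + 2*x**2, which has a local
# dip at x = 0 and a deeper basin near x = 2.6.
raw_grad = lambda x: 4 * x**3 - 12 * x**2 + 4 * x

# The higher level "bends" the space: it adds a gentle tilt toward the goal,
# without ever telling the subunit what the goal is.
tilt = lambda x: 2.0 * (x - goal)
deformed_grad = lambda x: raw_grad(x) + tilt(x)

print("raw landscape:     ", round(descend(raw_grad, x0=0.2), 2))       # stuck in the dip near 0
print("deformed landscape:", round(descend(deformed_grad, x0=0.2), 2))  # rolls into the basin near the goal
```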
If you zoom in, and this connects back to your point about karma and action and choices, all you see is physics. You say, oh well, of course that's what it's doing; that's all it could do, all the gradients point this way, that's what it had to do. But the reason it can do that and still reach some global goal is that the higher-level system has already deformed that space. So I see all these different levels, and I don't know how I'm ever going to draw this, that's a different story, as each deforming the simpler space it works in, which then changes the option space for its subunits so that they don't have to be as smart. This is sort of Dennett's idea of progressively dumber robots all the way down, and it works if each system arranges things so that the lower-level systems don't have to do quite as much of the work. There's a simple example I always go to from everyday life. If you've ever had a friend who was trying to stop smoking, and they liked to smoke at night, one thing they might do is put their car keys somewhere really hard to get to. Why? Because they know that at midnight, when they want a cigarette, they won't feel like hunting for the keys, and that's that. And who are they doing this to? Their future self. You're deforming the space for your future self, in that case consciously, and of course we all do it unconsciously too, but here you're consciously deforming the action space for your future self so that you can be on autopilot at that point. You don't need willpower at midnight to decide not to do it; somebody has already twisted the space for you, so getting the keys is just too high a barrier, and forget it. So I think that deformation of action spaces is a really important way in which lower- and higher-level systems relate: the lower-level systems simplify the action space for the higher-level system, because they're competent, so the higher level doesn't need all the dimensions, and the higher-level system bends the space of the lower level to make things easier to get to.

And that goes back to the other point you just made, about taking actions in the moment that are the consequence of all the pressures that you and other things have set up for you. If we think about will, in the sense of free will or decision-making or whatever, I see this again like a four-dimensional block, where each of us at any point in time is a slice of it. If you zoom in on the self that exists at that one time slice, your free will, so to speak, is limited to none, because at that moment, on the local scale, you are going to do whatever the local forces have arranged that you're going to do. A simple example: can you control what your next thought is going to be? You really can't; your next thought is going to be whatever it is. So at the local level, when you zoom in, there's no useful sense of freedom there. However, if you zoom out to the long timescale, what is true, I think, is that you can take repeated actions, whether they be
practice or whatever they're going to be, that alter your cognitive causal structure, so that over time you have made changes. So no, you can't control the next thought you're going to have right now, but you can control what percentage of your thoughts are negative, or whatever it is, down the line. So to me the question of choice and freedom is very much relative to the scale at which you ask it. If you zoom in to the physics and the short timescale, no, I don't think there's any useful sense of it; but over the long term, absolutely, because you can make changes, both in your environment and in your own structure, that will get you to a different place.

Thanks. Stephen? Yeah, this is really helpful, to hear it thought about this way. Just as we were saying about that almost transcendental framing, or the way we often treat things as excluded, egregious, as Sarah was saying, we need to move beyond the inner and the outer, where there's an inner thought process and an outer world; the two are linked, as we say. With active inference, the idea is that I don't necessarily have a pure model of those cones in my head, but I've got an action model for how to draw them. I haven't got the whole thing that can be dumped out like a data source, but I have a way to do it, which ties in. And when you mention looking at the future, one thing that active inference hasn't really dealt with, as in the car keys example, is the sequence in which something has to happen for the future to be shaped, which is kind of that quantum contextuality idea. Normally in active inference they talk about niche construction or niche modification, but this isn't just modifying the niche itself; it's modifying the trajectories you might take through that niche, which gives you more near-term possibilities for sculpting what can happen and when it can happen. So I think that's quite useful, and I was wondering how you see the relationship to niche construction, and whether models are sort of embedded in the world, i.e.
systems are really models in the world that we then read or interact with using our structures. How do we link up that kind of cone with the kind of niche construction that's being used in other areas of active inference?

Yeah, that's a very difficult question, and I can address a small part of it at this point. I think one of the key questions in all of this, and let's do the third-person case first and then talk about the first-person case, is this: when we look at a system and we make a map of its problem-solving in some space, of its goal-directed activity in some space, is that map in any sense objective? Could we make a case that that model of the space is better, or the best, or is there an infinity of ways to look at it? And in particular, how does that relate to the first-person perspective of the system itself: what does the space look like to it? It's a little bit like the whole umwelt idea, where the question is what the space looks like from inside. Chris Fields gave a really nice example in our computational meeting a couple of weeks ago. He said, imagine a bacterium sitting in some sugar gradient; it wants more energy. One thing we are tempted to do is draw a space in which, by rotating its flagella or whatever else, the thing moves up the gradient. But there's something else it can do: it can turn on a gene for an enzyme that lets it use a completely different sugar, or use that sugar better, or something else that solves the same problem with an effector that, to us, lives in an entirely different space. We look at that and say, ah, the system has a problem in physical space and a solution in transcriptional space. But if you're the bacterium, is there any sense in which those are not part of the same space? I have no idea what it's like to be that bacterium, obviously, but one could argue that our dissection of these things into two different spaces is totally biased, because we're scientists and that's how we like to carve things up, whereas to the bacterium both of those are effectors that live together. It's a bit like principal component analysis: if you're doing PCA, learning about your world, you end up with a picture whose axes don't necessarily make any sense to us looking at the system from outside. You get these components, these control knobs, and we look at them and wonder what on earth it's doing, but to the system those are efficient axes to build. People like Robert Prentner and Don Hoffman have models of cognitive systems building these worlds, these virtual spaces, for themselves from scratch, so that you don't actually know in advance what spaces you're in. So I think there's a research agenda there, which is: can we build some sort of appliance, some machine learning tool or something, that will try to extract these spaces in an, I won't say unbiased, but at least unsupervised manner?
Can we build an agency detector? Can we build something that looks at a system and says: if you assume this axis, this axis, and this axis, then I can give you a really efficient search strategy that will let you predict what the system is going to do next, because we can view what it's doing as problem-solving in this space? Maybe it generates a palette of such models, and one is head and shoulders above the others, and you say, fine, that's my scientific theory of what's going on here. Maybe that works; I don't know. That's a research agenda for the future. The other cool thing is that if you could do that, one thing you might want to do is build a more sophisticated synthetic intelligence by closing the loop and sticking that module inside the agent itself. Here's why I say that. All animals, I think, at least past a certain point, have agency detectors; this is why our visual systems love symmetry and so on, because looking out into the world you really want to know which are the passive elements and which are the other agents that might eat you, that you might communicate with, that you might convince to do something else. We are constantly estimating theory of mind, estimating agency, for everything we look at, because it helps you get around. You need to know if it's a rock, or something that will come eat you no matter what, or a conspecific that you might lie to, or have a relationship with, or do some complicated interaction with. So we're all trying to gauge that, and probably one of the things that kind of module also does is turn inwards, to tell you stories about yourself. The reason we know something like that exists is confabulation. There are older examples from the split-brain studies, but here's a modern one: in an actual experiment, somebody being treated for epilepsy or something similar had an electrode placed in their brain, and it happened to land in an area that, when stimulated, triggers laughter; the person starts laughing. So this person is sitting there having a serious conversation with the doctor, somebody off-scene pushes the button, the person starts laughing, and the doctor asks, why are you laughing? Zero percent of the time is the answer, "Geez, I don't know, that was super weird; I was being serious and suddenly my mouth started laughing; that never happens." What happens instead is the person says, "Oh, I just thought of a funny story, I was thinking of a funny thing that happened." So, and I'm not the first person to point this out, we are massively unaware of lots of what goes on in there, and we apparently have a module whose job is to tell coherent stories about what it all adds up to. It doesn't just turn outwards to see what kinds of agents are out there; it looks inwards and comes up with plausible explanations. No, you're not an automaton who laughed because somebody pushed a button; that's not a preferred explanation. The preferred explanation is that you're an agent, you come up with funny stories, and sometimes they make you laugh. That's a better explanation.
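As a very small sketch of what a third-person "agency detector" might look like, and entirely my own toy rather than an existing tool, the snippet below generates a trajectory from a system that is secretly regulating one particular combination of its variables toward a set point, then scores candidate axes by how well a simple "pursue a set point along this axis" model predicts the next step. The right framing wins by a wide margin.

```python
# Toy "agency detector" (hypothetical illustration): score candidate axes by how well
# a homeostatic "set-point pursuit along this axis" model predicts the system's moves.
import numpy as np

rng = np.random.default_rng(2)

# Ground truth (hidden from the detector): the system regulates w_true . s toward a
# set point, i.e. it is goal-directed along one particular direction in its state space.
w_true = np.array([0.6, -0.8, 0.0])
setpoint, gain = 2.0, 0.1

s = np.array([-2.0, 2.0, 0.5])          # start far from the goal so we see the pursuit
traj = [s.copy()]
for _ in range(60):
    s = s + gain * (setpoint - w_true @ s) * w_true + 0.02 * rng.normal(size=3)
    traj.append(s.copy())
traj = np.array(traj)
deltas = np.diff(traj, axis=0)

def score(axis):
    """Mean squared error of a 'move toward a set point along this axis' model."""
    axis = axis / np.linalg.norm(axis)
    proj = traj[:-1] @ axis
    d_along = deltas @ axis
    b, a = np.polyfit(proj, d_along, 1)          # fit d_along ~ a + b * proj
    predicted = (a + b * proj)[:, None] * axis   # predicted step, entirely along the axis
    return np.mean(np.sum((deltas - predicted) ** 2, axis=1))

candidates = {
    "x axis": np.array([1.0, 0.0, 0.0]),
    "y axis": np.array([0.0, 1.0, 0.0]),
    "random axis": rng.normal(size=3),
    "true goal axis": w_true,
}
for name, axis in candidates.items():
    print(f"{name:15s} prediction error: {score(axis):.4f}")

# The true goal axis scores far better: viewed in that space the behaviour is just
# set-point pursuit, and everything else looks like noise.
```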
So we have this, and if we had such a module, we could imagine, piece by piece, putting it together into what might be a fairly sophisticated cognitive system with a bit of metacognition and, much like all of us, somewhat faulty access to what it is actually doing, and all these kinds of interesting things.

Nice. So I'm going to take it back to this time-space cube, because I really can't wrap my mind around the fact that if you slice up the cube, and I think about it like a confocal z-stack, so one slice is the present, and if I can't change this slice, the way you can take a projection of the confocal image and look at the whole picture, then how is it that I'm going to change the future? All the slices up to this point led to this current slice, so can I change any point in the future? I don't see how that's possible if I can't change the present. We had Shanna Dobson and Chris Fields on, and we talked about compressing and expanding time and space, about time in all kinds of contexts, whether it's even a construct, how it works, non-commutative time and all that. So I don't know if you want to unpack that a little for me, because I'm just not seeing the possibility of changing the future without being able to change the present.

Yeah, totally. And for sure I'm not going to try to argue that we have a mature theory of free will of that kind; I don't think we do. I think you're absolutely right that this is a really critical point: if your freedom in each slice is zero, then many times zero is still zero. Totally true. But I think we can make two moves, and again, this is super early; I'm not claiming it will survive careful thought into the future. One shift, and I sound like a broken record with this stuff, is that I think this binary distinction between having freedom and not having freedom is wrong. I think it comes by amounts. You may have very little, if you're a mechanical clock; if you're some other type of living thing you may have more, and then more than that. And then what you have, and this is super amateurish, but just to visualize it, is the kind of addition you get in calculus, where you start with extremely tiny contributions that are practically zero but not quite, and eventually they add up to something. That's how I see it: what you have at each point in time is a teeny tiny amount, and if you look closely it's basically an epsilon not worth talking about, essentially zero, but not quite zero, and over time the mental effort, whatever that is, that you put into working on your free will, or your kindness, or whatever you're working on, integrates into a real area, even though you're integrating infinitely thin strips. That's
the vision I have in my head: if we do away with the idea that it's actually zero, and I don't think it's actually zero, then you can build something up over time, and you can magnify it. And I'll say one more thing. If we're going to abuse physics like this, and I realize this is all completely informal, I'll make one more analogy. Think about what the minimal level of agency is. Sometimes people ask me, is there a zero on your continuum, isn't there a natural zero, what's the smallest thing? Okay: if you were going to design the absolute minimal piece that still has some non-zero agency, what would it need? I feel like it needs two things. It needs some minimal amount of decision-making that isn't obviously caused by local causes, something that reaches outward in terms of space or time or complexity, that isn't obviously a locally determined necessity. That's one. And the second thing it needs is some amount of goal-directed activity, something that at least looks like goal-directedness. Once you've said that, it seems to me that individual particles already have both, because quantum indeterminacy gives you the first and least-action principles give you the second. So you've already got that. And so, again speaking as an amateur as far as the physics goes, my gut feeling is that there isn't a zero; you already start with some minimal amount, and then you can do one of two things with these particles. You can make a rock out of them, aggregating them in a way that basically gets rid of all those nice properties, so the result has very low agency; it aggregates everything in a way that doesn't do anything useful. Or, if you're alive, you can amplify those properties and end up further along the continuum. So that's my fuzzy story at the moment: I think we start off at non-zero, and then you can either stay there, if you don't have the right organization, or you can amplify the hell out of it and become more agential.

Nice, thanks. I definitely think non-zero is the right answer. So we've got maybe fifteen minutes left, and there's room for some closing thoughts and last-minute questions. Sarah or Steven, do you have any last-minute things you want to ask? No, my head is sufficiently exploded. Yeah, that was a lot to think about, because we've covered an awful lot of territory, but it was interesting to see you bringing in the thingness, and I know Karl Friston talks about that. So I suppose one last bit would be: would you call yourself a non-dual monist, in the way Karl talks about that idea? Or is that something a bit different, or maybe not even relevant in this context?

Yeah, that's a good question. I hesitate to put a name on it, for two reasons. Number one, everything we've talked about so far has been very much from a functional, third-person perspective, so we haven't
really touched the so-called hard problem of consciousness. Nothing I've said touches the question of what it is like to be one of these systems. Some people will say that's a non-problem and there is no such thing, but I'm not one hundred percent convinced of that; I do think there's a hard problem, and I don't have anything that addresses it, so I don't know what camp that puts me in. The other reason is that I have a lot of panpsychist sympathies, in the following sense. One reason people don't like panpsychism is that they take it to mean that rocks have hopes and dreams like the rest of us, and that's obviously not what I'm saying. The reason people go to that extreme is that they've scaled down the structures needed for agency but haven't scaled down the expectations. So yes, it's silly to think that rocks have the same hopes and dreams that we do; however, if you scale both sides of the equation, and if you're not committed to a binary classification of conscious versus not conscious, which I think is definitely wrong, then if phenomenal consciousness is a kind of continuum too, I see no reason why there couldn't be a tiny bit of something, something that's very hard for us to imagine because we're used to a much bigger level of perception and consciousness, associated with lots of other systems that people who normally work in cognitive science would never accept as conscious. So I have a lot of sympathies on that front, mainly because I don't buy the binary distinction. And we could talk about why: basically, bioengineering and the kinds of things that are possible now are dissolving a lot of what used to seem like sharp categories, and I'm not even sure at this point that the distinction between first-person and third-person knowledge is sharp. I'll draw you a simple picture of why, and I'll preface it by saying that I made a figure of this and sent it to a well-known philosopher, who said it was horrible and not useful, but we'll see what you think. Imagine there's a brain, some electrophysiology is being done, electrodes are stuck in there, the signal is being processed, and you're the scientist on the outside looking at that data. Uncontroversially third person, right? You have no idea what it's like to be that subject; you are studying some signals that come off of it. Fine. Now we change the scenario a little. Take one of those interfaces they make for the blind, which go on your tongue or sometimes into the retina, but basically they turn camera signals into something plugged directly into your brain with electrodes. Now we take that setup, and instead of the camera we plug
it directly into the electrophysiology apparatus. So now you're receiving a heavily processed signal, but it's coming directly from that brain into your brain. And what's interesting is that people who use these kinds of sensory augmentation devices eventually report that it's just like seeing; they learn to get around. The electric lollipop is this thing you stick on your tongue that shocks your tongue in a way that maps onto the pixels of a camera, and users report that it comes to feel just like seeing. So now you've moved: it's still sort of third person, but maybe it's 2.5th person or something. Then you can do another experiment. Instead of using all that electronics and processing in between, we just connect the brains directly, and we know this is possible because there are conjoined twins whose brains are in fact connected, with no clumsy electronic interface in the middle. So now, if my brain is connected to this other person having the experience, is that still third person? And then you ask, well, some people will say that's an aberrant case, but in your own brain you've got pieces of the brain that have to talk to each other. You're not an indivisible monad of some kind; you are a bunch of pieces that have to talk to each other. The left side has to talk to the right side, the front talks to the back; you are, in any case, pieces of brain talking to each other, and somehow that ends up being first person. So what we've just done is build a continuum, which we can fill in as finely as you like, along which you can smoothly move from first-person experience to third-person science, because the bioengineering tells you that you can do it. So I'm not even sure that's a sharp distinction. Anyway, that's a long-winded answer to your question.

Wow. I was going to ask one question, and now I'm going to ask another one. This makes me think, in relation to coarse-graining and to layers and levels of different organisms, that the kind of example you just laid out makes me question whether you can even stratify in that way, or whether it makes sense to. But let me go back to something we didn't cover, which I heard on your podcast: you ended with a question related to moral philosophy, or ethics, I guess; I don't know the distinction there. If you're coming from the perspective of not believing in a binary, you said it better than I did, living versus non-living, agent versus not, and then you're making things like xenobots, it seems as if, holding those views, there really wouldn't be an ethical question. So I'm wondering where your thinking is on that.

Yeah, no, I think there's definitely an ethical question, and I think it's not quite what we assume it is, and
I'll tell you what it is, but let's take a step back first. Long before anybody made xenobots, we made humans, the old-fashioned way. That's competence without comprehension: we had no idea how it worked, but for hundreds of thousands of years we made other humans, and we take care of them as best we can, most of the time; sometimes not very well. We also make all sorts of animals for food production and other purposes, we make hybrids, we make mules, animals that never existed before, and we've made new plants. So the first thing that's really critical, and the reason I'm sensitized to all this, is that people always say, oh my god, you're making xenobots, this is something new. Let's not forget what we've already been doing, and let's be super clear that there is a massive problem with the food industry; long before we get to worrying about xenobots, there are all kinds of things we need to fix. So I just want to be clear that we've already been making animals, and indeed other humans, for which we sometimes take good responsibility and sometimes don't. But I do think this absolutely raises an ethical problem, and the ethical problem is the following. When we go to synthetic biology and bioengineering seminars, there's usually now a session about ethics, and it goes like this: somebody shows a brain organoid made of human cells, someone says, oh my god, you shouldn't be doing that, somebody else says, well, let's just see how much like a human brain it is, and then people spend a couple of hours arguing about whether it is or is not enough like a human brain to worry about, and maybe they conclude it's fine or maybe they conclude it's not. But I think the much bigger issue is that whether something looks like a human brain is a very poor guide to how much you need to worry about it. In the past, the way we figured out how much moral responsibility we have for a particular system went roughly like this: you would look at it, and if it was squishy and warm and furry, you'd say yes, and if it was metallic and came off an assembly line, you'd say do what you want, no problem. Two things used to be the guide, and even with those principles we didn't do very well: where did it come from, did it evolve or was it designed, and what is it made of? And that was reasonably workable, because if it was evolved you could look at the tree of life and say, it's a worm, we don't need to fill out any forms for these experiments, or it's a frog, I need to fill out a lot of forms before we can do these experiments, and that's in fact how it works, because you know something about that lineage and you can make some guesses about where things sit, and if it's an octopus people really don't know what to do, and so on. All of that is going completely out the window. In the next, I don't know, couple of
decades, we are going to be surrounded by, and this is another thing I'm writing about, the option space of possible creatures: hybrids, cyborgs, a million different ways to recombine evolved, designed, living, non-living, and software agents into new forms that have never existed before. So what something looks like is no guide to what its cognition is; where it came from, evolved or designed, is going to be a bad question, because half of the material we're building with is itself evolved; and so on. All of the moves like "it's just a machine" versus "oh, but it's a nice mammal" are going to go out the window. So to me the ethical problem is much bigger than asking whether something is like a human brain. What we really need is a new ethical framework for learning how to deal with agents that look nothing like us, that don't look like anything we've ever seen before, and that are nevertheless going to have all sorts of cognitive capacities. We have to let go of categories like machine and robot and all these other terms that never really meant much; they worked well enough in the past, but they're no longer going to be useful. That's the ethical problem.

Wow. So great. I'm going to go stare at a wall and try to integrate this information; it was a lot. Thank you so much for coming on. Thank you so much. Yeah, this was really fun. Yeah, a great discussion. Thank you so much. Thank you. Thank you.