Hello and welcome, everyone, to the Active Inference livestream; I believe we are live. It is Active Stream 9.0 and it is November 18, 2020. I am Daniel Friedman and I will be doing a solo Active Stream today. So welcome to TeamCom, everyone. We are an experiment in online team communication, learning, and practice related to Active Inference. You can find us on Twitter at @InferenceActive, at activeinference@gmail.com, at our public Keybase team, at our YouTube channel, or through any other mechanism. This is a recorded and archived live stream, so please provide us with feedback so that we can improve on our work. All backgrounds and perspectives are welcome here. For video etiquette on live streams: make sure to mute if there's noise in the background, raise your hand so that we can hear from everyone on the stack, and use respectful speech behavior. Just a quick announcement before we jump into the paper discussion. You can find on our Twitter information about the rest of our discussions for 2020; we have six more group Active Streams in 2020. On November 24th and on December 1st, in Active Streams 9.1 and 9.2, we're going to be discussing the Projective Consciousness Model and Phenomenal Selfhood, and that is the paper that this Active Stream 9.0 is context for. In Active 10, on December 8th and 15th, we'll talk about the paper "A Variational Approach to Scripts," while in Active 11, on sophisticated affective inference, we'll talk about more modeling and math again; that will be on December 22nd and 29th. So that will close out 2020, and then we look forward to a whole new panel of events, projects, and collaborators for 2021, so find out more on our Twitter. Let's talk about what we're going to do in Active Stream 9.0, which is right now. The goal of Active 9.0 is to set the context for 9.1 and 9.2, which are going to be the group discussions that we have on this paper.
And the paper that we're going to be discussing for 9.1 and 9.2 is called "The Projective Consciousness Model and Phenomenal Selfhood," by Williford et al. 2018, and there's a link that people can see. The video that we're about to jump into right now is an introduction to the context of some of the ideas presented by the authors. It's not a review or a final word; it will contextualize some of the ideas, math, and vocabulary that might be interesting or important to know for reading Williford et al. 2018. And the punchline that they're going to work their way towards is that we might be able to combine fields such as geometry, cybernetics, and neurophenomenology, using the framework of active inference and the free energy principle, to potentially understand or explain consciousness. So it's going to be up to you, up to us, how we feel about these kinds of claims that they're going to get to in the paper, but that's where we're headed. The sections of 9.0 will be as follows. First, we'll go through the keywords and hopefully provide some background on each, whether it's your first time hearing one of these words or whether you work in one of these fields. Then we'll talk about the aims and claims of the paper, going through their abstract and the roadmap of the sections. Then we'll talk a little bit about consciousness from the authors' perspective: what do the authors say is important about consciousness, and what do they think are invariant aspects of it? We'll then walk briefly through the figures, just to try to understand: what are these figures even doing? How are we going to get a first pass on understanding what these figures are all about? And then in 9.1 and in 9.2, we're going to stay with this exact same paper, so save and submit your questions, and feel free to get in touch with us if you'd like to participate, whether that's in 9.1 or 9.2 or in a future endeavor. Let's talk about the keywords.
The first keyword is actually not among the paper's keywords, but it is what we're about here: active inference and the free energy principle. That's what we'll try to tie so much of this back to. And here are the keywords that were listed: cybernetics, projective geometry, consciousness, neurophenomenology, first-person perspective, and perspectival imagination. So, cool terms. Maybe it's your first time thinking about math and geometry in the context of consciousness; maybe it's not. But hopefully, as I go through these following keywords, this is interesting and exciting, whatever level of familiarity you have with these concepts. The first keyword is cybernetics, and it's also written there in Russian as кибернетика. There are many resources to read on the history of cybernetics, some of which we're going to talk about here, because in a lot of other Active Streams and other locations you can probably hear about the mathematics underlying cybernetics and about how it's related to control theory, both things that we've talked about on this stream. But let's talk a little bit more historically, or from a cultural perspective: what is cybernetics? What is it trying to do? Cybernetics is a school of thought and a tradition of action, however you want to think about it, that's related to the action, policy selection, and control of goal-oriented systems. Now, there are various orders of cybernetics, which we can walk through here. I'll go through a couple of different resources on cybernetics, and hopefully you'll see soon why I put a history citation on the bottom left and a little bit of Russian on the top, and it's not just because we have great Russian colleagues in TeamCom. Here are the orders of cybernetics; there are probably different ways to state them or think about them. First-order cybernetics might be thought of as self-regulation without detailed observations.
So this could be like a negative feedback system that is able to maintain a semblance of homeostasis in virtue of being an isolated or a closed process: there's something that gets triggered when it's too hot, and as soon as it's triggered that it's too hot, it instantly kicks in a cooling program. Second-order cybernetics is self-regulation that can be influenced by observers within a domain of control. That could be, for example, a thermostat that's able to enact different kinds of control policies as a function of continuous inputs from a thermometer. Third-order cybernetics is self-regulation that is influenced by observation with a relational basis or an ecological grounding; the system in this third-order cybernetic loop is able to acknowledge its surroundings and learn, develop, and evolve. And fourth-order cybernetics is when self-regulation involves the enactment of multiple different realities or adjacencies, affordances, counterfactuals: a lot of ways you can think about that in terms of a functioning creative observer. These are deeply contextualized and integrated systems that are doing higher orders of cybernetics, and I'm not a stickler about which order gets which number. I also drew the parallel on purpose with one-, two-, and three-loop thinking, which is related to some decision-making theory from the innovation and entrepreneurship space, but I just thought it's interesting. Here's a figure from the book chapter "Cybernetics in the 20th Century," on the bottom left there, and it shows a map of concepts and how they're related to cybernetics. On the left are sort of two inputs to cybernetics: control theory and the idea of control, and then communication theory and the idea of communication and information.
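The contrast between the first two orders above can be made concrete in code. This is a toy sketch of my own (the function names, gains, and setpoints are illustrative assumptions, not from the talk): a first-order loop as a bang-bang trigger with no graded observation, versus a second-order loop whose control effort varies continuously with what is observed.

```python
# Toy contrast between first-order (bang-bang) and second-order (proportional)
# feedback control of a room temperature. All numbers are illustrative.

def bang_bang(temp, threshold=25.0):
    """First-order loop: a fixed trigger with no graded observation.
    Cooling is fully on above the threshold, fully off below it."""
    return 1.0 if temp > threshold else 0.0

def proportional(temp, setpoint=22.0, gain=0.5):
    """Second-order loop: control effort varies continuously with the
    observed deviation from the setpoint."""
    error = temp - setpoint
    return max(0.0, min(1.0, gain * error))  # clip effort to [0, 1]

def simulate(controller, temp=30.0, ambient_drift=0.3, cooling_power=2.0, steps=20):
    """Run a toy room-temperature loop and return the temperature trajectory."""
    history = []
    for _ in range(steps):
        effort = controller(temp)
        temp = temp + ambient_drift - cooling_power * effort
        history.append(round(temp, 2))
    return history

bb = simulate(bang_bang)      # oscillates around the trigger threshold
pp = simulate(proportional)   # settles near the setpoint
```

Running this, the bang-bang trajectory keeps bouncing around its threshold, while the proportional controller converges to a steady temperature just above its setpoint, which is one simple way to feel the difference between a blind trigger and a loop that observes continuously.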
Control theory and communication theory, then, are two branches that we've talked a lot about on this stream and will continue to talk a lot about, because control of action, and communication and message passing (inference is the thread that runs through them all), are critical for control theory, for cybernetics, for the kinds of systems we want to get into working with. On the right side, we see cybernetics branching back out into the machine area, with the engineering sciences and the planning of, for example, complicated factory operations, but also into the animal and the social levels. So animal cybernetics is about, for example, the self-regulation of behavior, and social cybernetics could be about the way that our governance is run; there's a really interesting book, Red Plenty, that's kind of like historical fiction about cybernetics. Speaking of history, this is such an interesting plot, and it was in the cybernetics book, figure 1.6. It shows the usage of the terms "cybernetics" and "control," by year, in the text of publications indexed by Google Scholar. Some caveats: maybe people are speaking about it in different words, or it's being used in a different language, not English, or there's some other way of saying it. But look at how similar these two terms are in their trends, where both of them ramp up gradually, leading to a lot of discussion of cybernetics and of control in the early 2000s. So think about these peaks in the early 2000s as potentially reflecting something at the close of the 1900s, and then note how they both start to drop off. This is consistent with something that a lot of us have experienced, which is that cybernetics is a term that a lot of, at least, English speakers may not be too familiar with. But if we look at the journals that Friston and others have published in, we find a lot of connections to cybernetics. What is going on there?
To get a little sense of what is going on, two other resources to look into would be this big-history understanding of complexity and information in cybernetics, and then this book, which I'll read a quote from right now, called From Newspeak to Cyberspeak: A History of Soviet Cybernetics by Slava Gerovitch. The author writes in this book from 2002: "The history of Soviet cybernetics followed a curious arc. In the 1950s it was labeled a reactionary pseudoscience and a weapon of imperialist ideology. With the arrival of Khrushchev's political thaw, however, it was seen as an innocent victim of political oppression, and it evolved into a movement for radical reform of the Stalinist system of science." Pretty interesting. "In the early 1960s it was hailed as a science in the service of communism, but by the end of the decade it had turned into a shallow fashionable trend. Using extensive new archival materials, Gerovitch argues that these fluctuating attitudes reflected profound changes in scientific language and research methodology across disciplines, in power relationships within the scientific community, and in the political role of scientists and engineers in Soviet society. His detailed analysis of scientific discourse shows how the newspeak of the late Stalinist period and the cyberspeak that challenged it eventually blended into cybernewspeak." So maybe that was more of a book summary of this work, but it's pretty interesting how that plays out, and we'd definitely love to learn more about this, whether from Russian-speaking colleagues who could fill us in or anyone else. Next keyword: projective geometry. What is projective geometry about? Well, of course, you could go down the rabbit hole with this topic, but let's try to keep it constrained to what the authors are going to use it for in the paper.
They write: "The concept of a 4D (four-dimensional) projective transformation is central to the model, as it yields an account of the link between perception, imagination, and multi-point-of-view action planning." Great, okay. So what is happening here, and how can we understand this and its relationship to projective geometry? I'm going to introduce just two ideas here with this visualization. The first, looking at the bottom, is the distinction between Euclidean and non-Euclidean types of spaces. The second idea, with the purple arrow, is the idea of a map, or potentially you could call it a mapping, because it's actually a process, and a certain type of process: a projection. So when we're talking about projective geometry, we're talking about geometry, which means the shape of things and the spaces they exist in. As opposed to topology, which is about how things are connected with nodes and edges, geometry is about things and their shapes and their spaces. We're going to talk about projections among different geometries, specifically this one between Euclidean and non-Euclidean geometries. And here's a quote from the authors. They write: this perspectival phenomenal space (so, the space in which, they're going to say, phenomenal experience occurs) "allows us to locate ourselves and navigate in an ambient space that is nevertheless essentially Euclidean. However, in a Euclidean space there are no privileged points, no points at infinity, no horizon, and an arbitrary origin" (they say there's no non-arbitrary origin; same thing). "This very strongly suggests that a non-Euclidean frame must be operating in the phenomenal space, with the help of sensorimotor calibrations, to preserve the sense of the invariance of objects across the multiple perspectives adopted and relate the situated organism to the ambient Euclidean space."
So what is it that allows me to continue thinking about this as a pen, even as it comes very close to my face and looks very weird? What is the generative model that allows me to act like everything's normal, as objects maintain their invariances while the eye travels through the space? We can contrast this with the Euclidean space that they're supposing we exist in. They're saying we exist in a space that doesn't have privileged points of access, no points at infinity, et cetera; the space that an organism, for example, moves through is Euclidean, while the space that its consciousness is being projected into, experientially, is non-Euclidean. What about it is non-Euclidean? Well, let's just take the opposite of what's in red and put it in blue. So there are privileged points from the point of view of the observer: that is the privileged point of view, quite literally. There can be points at infinity: points that are so far away, they're effectively infinitely far away, not just from an affordance point of view but from a perception point of view. There can be a horizon; in fact, we see it every day. And there can also be a non-arbitrary origin, which is a bit of a restatement of the idea that there could be a privileged point: basically, it's privileged, it's a non-arbitrary origin; these are pretty similar concepts. Now, "non-Euclidean" is like saying "nonlinear," and the joke about this is "things that are not elephants": that's the non-elephant class, and you'll know it when you see it. So that's about how helpful it is to say it's a non-Euclidean or a nonlinear model. Okay, we get it: it's not a linear model, it's not Euclidean geometry, but there are many choices. Yet even though it sounds like we're barely ruling anything out, we're actually ruling out very specific parts of the geometry of the Euclidean space.
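The "privileged point of view," "points at infinity," and "horizon" ideas above all have a standard computational form in homogeneous coordinates, which is also the machinery behind projective transformations. Here is a minimal sketch of my own (the paper's 4D model is much richer than this; the function names and the specific projection matrix are illustrative assumptions):

```python
# Minimal homogeneous-coordinate sketch of a perspective projection.

def to_homogeneous(p):
    """Lift a 3D Euclidean point (x, y, z) to projective coordinates (x, y, z, 1)."""
    return [p[0], p[1], p[2], 1.0]

def from_homogeneous(h):
    """Return to Euclidean coordinates by dividing out the last component.
    Points with w == 0 have no Euclidean counterpart: they are points at infinity."""
    if abs(h[3]) < 1e-12:
        raise ValueError("point at infinity: no Euclidean representative")
    return [h[0] / h[3], h[1] / h[3], h[2] / h[3]]

def apply(matrix, h):
    """Multiply a 4x4 matrix by a homogeneous 4-vector."""
    return [sum(matrix[i][j] * h[j] for j in range(4)) for i in range(4)]

def perspective(d=1.0):
    """Perspective projection toward the plane z = d, viewed from the origin.
    The last row sets w' = z / d, so the homogeneous divide shrinks distant points."""
    return [[1, 0, 0, 0],
            [0, 1, 0, 0],
            [0, 0, 1, 0],
            [0, 0, 1.0 / d, 0]]

M = perspective(d=1.0)
near = from_homogeneous(apply(M, to_homogeneous([2.0, 2.0, 2.0])))
far = from_homogeneous(apply(M, to_homogeneous([20.0, 20.0, 20.0])))
# near == far: every point on a ray through the origin lands on the same image
# point, which is exactly the observer's "privileged point of view."
horizon = from_homogeneous(apply(M, [1.0, 0.0, 1.0, 0.0]))
# A point at infinity (w = 0) projects to a finite image point: a horizon.
```

Two things fall out of this small example: points on the same ray through the origin collapse onto one image point (the literal privileged point of view), and directions with w = 0, which have no place in the Euclidean picture, land at finite image points, which is where horizons come from.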
In Euclidean space, parallel lines don't intersect, but that's not necessarily the case for other types of geometry; or, a triangle's three angles sum to 180 degrees, but on a sphere it's not that way. So that is what is happening with these geometric projections: we're going to be projecting from, sort of, the grid that they want to think about. Now, you might want to ask Bucky Fuller whether we're actually in a Euclidean space, or whether it just seems that way because of public education, but for the purposes of this paper, we're in a Euclidean space for action, and that's the space we want our perception to simulate, so to speak: to act like everything is normal and to decrease our uncertainty about a Euclidean-mapped space, while also relating it, through a projection, to something that does have a privileged point of access and that is, as they put it, informed by sensorimotor calibration and related to our action affordances. So that's how you can have a Euclidean concept of, for example, a gap in the ground that you can't jump over: it's a very complex estimation of what you can do and how big the gap is and where you are, how much run-up you would need, et cetera. All of that is going to come together by doing what they're going to talk about here. Now, consciousness. It's great to talk about consciousness, and I'm going to go into a little bit of detail soon about some of my own thoughts on the issue. But when talking about consciousness, it's important to be really clear. Even though it's something that we're not always sure about, it's important to at least use language very carefully and to bring our scholarship and our intercultural respectfulness, because, as I'll hopefully come back to in a few points soon, it's just a critical issue, and we can't think of it flippantly.
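Before moving on to consciousness, the spherical-triangle claim above can be checked numerically. This is an illustration of my own, not from the talk: the "octant" triangle on the unit sphere, whose vertices sit on the three coordinate axes, has three right angles, so its angles sum to 270 degrees rather than the Euclidean 180.

```python
import math

def spherical_angle(a, b, c):
    """Angle at vertex a of the spherical triangle abc (unit vectors on the sphere),
    measured between the great-circle arcs a->b and a->c in the tangent plane at a."""
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    norm = lambda u: math.sqrt(dot(u, u))
    # Project b and c onto the tangent plane at a, then measure the angle between them.
    tb = [b[i] - dot(a, b) * a[i] for i in range(3)]
    tc = [c[i] - dot(a, c) * a[i] for i in range(3)]
    cosang = dot(tb, tc) / (norm(tb) * norm(tc))
    return math.degrees(math.acos(max(-1.0, min(1.0, cosang))))

# The octant triangle: one vertex on each coordinate axis of the unit sphere.
A, B, C = (1, 0, 0), (0, 1, 0), (0, 0, 1)
angle_sum = (spherical_angle(A, B, C)
             + spherical_angle(B, C, A)
             + spherical_angle(C, A, B))
# On the Euclidean plane this sum is exactly 180 degrees; here each angle is 90,
# so the sum is 270: the excess is what makes the geometry non-Euclidean.
```

That angle excess is precisely one of the "very specific parts" of Euclidean geometry being ruled out when a space is called non-Euclidean.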
The authors write, when talking about consciousness (because I want to just go with what they think about consciousness for now): "We might divide contemporary neuroscientific and psychological theories of consciousness into three main types: the bio-functional, the cognitive neuro-functional, and the information-theoretic. As should be clear, the Projective Consciousness Model, the PCM, incorporates the emphasis on self-awareness common to many approaches to consciousness, as well as an emphasis on the biological functions of consciousness." So they're defining the space a certain way. I'm going to define it a different way soon, but they're defining it that way. Then we're going to insert the PCM here, and then they write at the end: "The PCM thus subsumes and unifies the main insights in all of these approaches to consciousness. Above all, what we add to preexisting theory is a geometry: the thesis that projective transformations and projective frames necessarily subtend the appearance and workings of consciousness." So whatever we're going to learn in this Active Stream, and whatever we think the PCM is doing, we want to evaluate it not by what it doesn't do but by what it does do and what it claims to do, what it sets out to do; that's where they're trying to step in. Their understanding of the literature on consciousness is that it breaks down into these three kinds (the citations are there; you can read the paper). Then they're going to step into that gap, and they're going to make the claim that they're subsuming and unifying the main insights from these approaches. Where does that leave these approaches from a theory perspective? A question for another day. But that is where the authors understand their work to be fitting in. So if you don't know what the PCM is, or it's kind of trippy to think about what it means to do mathematics and geometry of consciousness, that's perfect: here is the blank that they're going to fill in with their work.
What do I think about consciousness, though? I want to be clear about what my stance is, or at least give the tip of the iceberg on my stance, because this is a first-person Active Stream, there's some relevant research I'll discuss in a few minutes, and I've also long been interested in the science of consciousness, as well as the second layer of how it's communicated, especially by scientists. So there are a lot of places to go here and a lot of places to start. I'm just going to walk through a few general thoughts and raise a few questions (really, more questions than answers), and then I'll discuss a paper from the last couple of years with a colleague, Eirik Søvik. There are many ways to think about studying consciousness, across cultures and through time, and one path you could go down is thought experimentation. This could look like meditation, a specific thought experiment, or some other sort of insight approach, a Gedanken approach; whether it's culturally called one thing or another is not as much my concern here. The thought experiment approach doesn't involve making interventions in the outside world, for example; it's just thinking about consciousness, hearing TED talks, and so on. The issue is that, whatever is actually happening, if that's what we're pursuing, we could just be led to the wrong conclusion. And if there is some element of free will or an interventionary role of consciousness, then we could intervene to end up believing that we couldn't intervene, or any of these combinations. So I just never quite understand the relationship between what one directly feels convinced about with respect to a thought experiment, a thought, or an idea, and how that might relate to how it actually is for people who aren't that person.
And one aspect of this is that it's really impossible to know which aspects of thought experiments can be transferred to other situations. For example, one of the most famous and most cited papers in this area is a 1974 paper by a philosopher named Nagel, which raises the question: what is it like to be a bat? I don't think he invented that question, but he's the one who got famous for it. In that paper it's written: "an organism has conscious mental states if and only if there is something that it is like to be that organism, something it is like for the organism." Now, you might think: what are philosophers doing? Because it sounds like sort of a weird question. And I understand that many careers and a lot of good thinking have come out of this question, so I want to optimistically see it as a catalyst and a seed. It's always good to have simple questions to return to; it's good that it's such a simple question, that it opens onto so many perspectives, and that it initiates students into our knowledge traditions. There are so many advantages to having these kinds of classic works. But to be blunt, I find the phrasing of this question of consciousness quite unhelpful, because it is so informal, so hypothetical, untestable, ambiguous; and yet it's specific: it brings a bat into the question. And then you can imagine: oh, is it a dog? What is it like to be a dog? Now, these are all good things to imagine, but again, they don't really bring us closer to addressing real frameworks for consciousness. And the reality is that there's now a vast wasteland of literature, not comparable to other work in the same exact space across different species, that essentially amounts to speculation and appeals to what it might be like to be a certain type of animal: well, if I was really a bat, and I grew up as a bat, would I ever have thought about what it was like to be a human? How would I?
So it ties up into all these second-level questions that aren't super relevant. And we've always wondered, through time, in every culture: everyone has wondered what other systems experience, whether that's a person or an animal or a fictional character or a physical object. So the speculation and the think pieces don't really add something substantial to the literature, which is what the scientific literature is about for me. The bigger issue, though, with thought experiments of any kind is that we can always imagine things that just don't make sense within a framework, or any framework, or just don't relate to the world that we live in. We all know that we can be wrong about an idea but believe that we're right. And to rephrase that in an active inference framework, we can say these are situations where the precision parameter is very high: there's a belief, a metacognitive belief, that the precision is high, that some question has been resolved. That represents, for example, someone's model of consciousness, whether it came from a knowledge tradition or their own thinking; they believe that they're onto it, they've got it figured out, it's resolved for them. But that doesn't mean that your estimate, ŝ, is related to the actual state in the world. And I'm not passing judgment on being overconfident or being wrong, because sometimes it can be very important and powerful to have these kinds of beliefs, but that's not what being correct is about. And if you actually don't care about being correct, being right, going towards the truth, then I'd love to hear where you're coming from. The rest of the scientific literature, although with many shades and many differences, is moving towards what actually would be generalizable truth. So let's go back to consider experimental approaches, which is something that comes up a ton in the philosophy and the science of consciousness literature.
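The precision point above can be made concrete with a toy Gaussian belief update. The numbers and names here are my own illustrative assumptions, not from the talk: precision is inverse variance, the posterior mean is a precision-weighted average of prior and observation, and a prior held with very high precision barely moves, no matter how accurate the observation of the true state ŝ should be tracking.

```python
# Toy precision-weighted Gaussian belief update (all numbers illustrative).

def posterior(prior_mean, prior_precision, obs_mean, obs_precision):
    """Combine a prior belief and an observation, both Gaussian.
    Precision is inverse variance; the posterior mean is the precision-weighted
    average, so whichever term carries more precision dominates."""
    post_precision = prior_precision + obs_precision
    post_mean = (prior_precision * prior_mean + obs_precision * obs_mean) / post_precision
    return post_mean, post_precision

s_true = 10.0   # the actual (hidden) state of the world
obs = 10.0      # an accurate observation of it

# An agent that is confidently wrong: prior far from s_true, huge precision.
mean_stubborn, _ = posterior(prior_mean=0.0, prior_precision=100.0,
                             obs_mean=obs, obs_precision=1.0)

# An agent holding the same wrong prior, but loosely, updates much further.
mean_open, _ = posterior(prior_mean=0.0, prior_precision=0.1,
                         obs_mean=obs, obs_precision=1.0)
```

The confidently wrong agent's estimate stays near its prior of 0 despite a clean observation of 10, while the loosely held prior moves most of the way to the data, which is the sense in which a high metacognitive precision can feel like resolution without being related to the actual state.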
Now, it would be awesome to do experiments on consciousness, and in some way, don't we all? We could study humans as they behave and sleep and daydream, take psychedelics or wake up from anesthesia, and as they develop; there are so many things that could be exciting to learn about. Also, if we can do experiments on consciousness, we could do experiments on animals. We could do experiments that are interventional, like manipulating the brain of an animal, or the body or the situation of an animal. These experiments might be seen as inhumane, or as massively violating the ethical standards we hold for humans. So there's a bit of irony in doing these potentially invasive, non-consensual experiments with captive or wild animals when we're studying, potentially, their capacity to not consent to our interactions with them. It's an interesting question, and I won't go down the ethics route here. But a non-ethical question that's related to experimentation is that we just don't know which systems to study or what observables to measure, and so we may be studying something that isn't really measurable or observable; we'll come back to this idea of observability in a few minutes when we're talking about neurophenomenology. And the reality is, no matter what you do measure, even if you come up with some special measurement that hasn't been done before, all experiments are interpreted within a cultural and scientific milieu, in terms of which measurements, conclusions, and interpretations are ethical, correct, accepted, valid, discussed, all these things. And going another layer back about experiments, even assuming that we can measure what we want to know: with experiments, especially if they're disconnected from grounding in theory, it's difficult to know what is really being uniquely tested or demonstrated or falsified, or what we can learn from different kinds of measurements.
And there's a lot of ambiguity, and a lot of levels that ambiguity can enter into, but just to give a few examples: an experimental result could be consistent with many different frameworks for consciousness, or potentially all of them, or some set of them. So when we really want to be identifying which frameworks we want to work with, doing experiments doesn't narrow the space as much as we'd like, which we'll return to soon. And that's kind of related to the neural correlates of consciousness debate. Do you put somebody in a neuroimager and then have them do a no-report paradigm, where they don't say "yep, I heard the sound"? You play sounds of different notes, but what if a sound just subconsciously influenced their brain? There could still be a neural signature of that, so how would you know they experienced it? Well, that's called a report paradigm: you ask them, did you experience it? But in some sense, you're just asking them if they remember, or if they hit the button. So it gets messy, and that really gets into some of the weeds about what you're measuring. Another example, or area where we see limitations on experiments related to consciousness, is animal behavior. Animal behavior seems like it'd naturally be adjacent to human consciousness studies, and we see a wide variety of tests being used here, like pulley tests (can two of the same animal pull a pulley so they get a reward?) or looking in a mirror (can an animal recognize itself and take an action pattern to clean itself in the mirror?). The first thing to note is that whatever kind of setup you're dealing with, a visual illusion, a cooperation task, a recognition task, memory, planning, it's always going to be based upon the affordances of that animal. And again, it might be ambiguous whether they're using some cue that you actually expect; and if an animal doesn't see, it's not going to be able to do the mirror test.
So at the very least, these tests for consciousness, which we'll get back to, don't really help us study the bedrock question of frameworks for consciousness, because all you could say is: okay, this bird can do the pulley test, but we don't even know what that means. And then, of course, if we find out that experience potentially isn't related to behavior as much as we thought, or one has a model on which it's not related to behavior, then you can say: yeah, it's a robot that just acts like it's a crow. All right, I think I'm still streaming, but it looks like my camera froze. I'm going to just plug it back in. Actually, I don't know what it's about, or whether that happened last time as well, but I'll just close the camera for now. Actually, let me just see if I can remove the source and then add the camera back in. We're going to find out live whether it is OBS being weird or whether it is the actual USB connection. And it turns out it was OBS; we now know, and we're right back in it. So, to the point about robots, or zombies as Dennett might put it, doing complex behavioral tasks: you wouldn't know whether the experience was being had, even if the behavioral experiment you had lined up for this animal was enormously complex. And robots might already be able to do things that we might not expect them to be able to do, such as learning language or being culturally embedded. So yet another point here: before you get too excited about any single experiment that might prove or disprove a model of consciousness, consider it just another grain of sand on a sand heap. Any single grain of sand that you're adding to this heap could be the one that sets the whole cascade in motion, so by all means, I encourage experimentation.
But that being said, let's not fool ourselves: many data points are already out there in the world, and just one more experiment is unlikely to provide any kind of knockout evidence for or against a model of consciousness. So, to recap: there's the thought experiment branch, the philosophy, meditation, insight route; and there's the experimentation and interpretation route. And those are connected, of course; experiments are ideas too. But they both have pros and cons, and people will often argue one or the other as if they have the answers from one branch or the other. I hope I've demonstrated that that isn't true. One reason I think it's important to have this viewpoint is that we're living in a world, just as a prima facie observation, where there are some people, humans, who say that they think all things are conscious, or that all things are conscious in their own special way, or that nothing is conscious, including humans. And of course, there are many flavors of each of these viewpoints that could be held, or at least communicated, and there are many beliefs about consciousness that might not be verbal or expressible in a certain language. So the point is, we're in a state where the evidence in the world has resulted in people believing different things through time, different things even within the same family. So let's be humble. The table has mostly been set for us in terms of observations about ourselves and about our world. That doesn't mean we can't reimagine much of our own experience or the world, but it would really have to be quite the thought experiment, or quite the experiment in the real world, to resolve these longstanding and philosophically grounded differences in worldview between schools of thought and between people.
And even if it did this, even if the experiment were so good that it just convinced everyone, it wouldn't even mean it was right. It would just mean that it was convincing as a meme. And so if we're really trying to find out what's right rather than how can we shape our own reality and other peoples, which isn't always the worst task, if we wanna be right, then as far as I know, it's not gonna be the experiment or the thought experiment route alone. And one last point on this slide because there's a lot to say here is that there's academics who believe different things. And you can see these slides and citations and see that they cite different work that says one thing about the global workspace or another thing about information theory. There's also different views on consciousness from different world traditions, from artists, from practitioners, from non-scientists, from non-philosophers, from all kinds of people, children, adults, everyone. This is just to say that we should be very cautious about any kind of hegemonic, especially scientific effort to define consciousness, whether it comes from academia or from some other source. There's just way too much on the table here for people to be making sweeping claims about what is or is not conscious. And there's a lot that is downstream of that. It's really a personal and a cultural thing, what somebody believes about consciousness, what they wanna say or what they wanna commit to. And also while some people might find these topics exciting, I think they're exciting. If you're listening, you probably find them to some extent exciting as well, but many people are not interested in considering it and don't perhaps want to be told this is how their consciousness or experience is. And other people are simply okay not knowing whether they're curious about it or not. 
And so I'm always skeptical about "science says" type thinking, which is often how scientific papers are written; even when it's framed as "the authors say," it's about how it's communicated. We've all seen the misuse of "science says" policymaking related to everything, and consciousness would be the next level of that, wouldn't it? So fill in the blanks: think about what kind of power is held by the people who get to say what or who is conscious, or to evaluate conscious states such as suffering or pleasure. There's just a lot to it. And then the last point here is that people can change their beliefs during their own lifetime and go deeper and shallower into different ideas, and that probably doesn't change the underlying nature of consciousness. So it will never be enough for how we think at one moment, or for a group of people to believe something. I've gone down these roads many times, alone and in conversation, and people will tell me, well, you know, we're working with our best possible understanding of consciousness given the research at this time. And it's just simply not true. There's so much to know and respect in the research, and also in the non-scientific literature on consciousness, that there's a lot more to the discussion than, well, let's just tell the kids something simple. Now to go beyond that, and thanks for bearing with me. I wanted to say a few of those things because a lot of the time consciousness papers are approached by non-scientists with excitement, so I wanna provide a bit more of a nuanced scientist's take here on how we can respectfully study consciousness, because this isn't just Friston et al. figuring it out in a paper, though I love the paper and it's interesting. So how do we hold this "yes and" perspective: through nuance, hearing multiple people's perspectives, having participation. 
So what I'm gonna talk about here is a paper that I worked on with my colleague Eirik Søvik, who's a professor in Norway. The paper's called "The ant colony as a test for scientific theories of consciousness." This is how I came to be thinking about a lot of these topics. Now, Professor Søvik's background was in neurobehavior and in genetics of the honeybee. Mine was in similar areas, but related to ants. Eirik is also a very pragmatic and philosophical thinker. His PhD advisor, Andy Barron at Macquarie University in Australia, is a honeybee biologist, and Professor Barron has also written philosophy papers related to entomology as well as to experience and consciousness. On my side, my PhD advisor, Professor Deborah Gordon at Stanford, is an ant behavioral ecologist who has also written a lot of philosophical work, broadly construed. So both Eirik and I had really positive mentors and role models for how we could combine entomology with philosophy. Where we were coming from with this paper was: how could we combine our direct, ethical, behavioral-ecological experience with the social insects with what we knew about neurogenetics and molecular biology, and then tie that through respectful engagement with some of the philosophy-of-consciousness literature? Here's what we didn't do in the paper. We didn't make a claim or a bet on whether ant colonies are conscious or not, or whether ant workers are conscious or not. If people have feelings on that, let's hear about them on the ActiveStream. Let's plan an ant stream. Let's do something fun. Let's hear about it. But that's not what we did in this paper. We do a few things in this paper. The first is that we proposed this terminology of a forward and a reverse test for consciousness, which I'm gonna return to in a second with this image. 
The second thing we did was to review the scientific literature on consciousness and make classification schemes for different kinds of theories of consciousness. So you can see why I was interested in this paper's breakdown of the different scientific approaches to consciousness. And then in the third part of the paper we used empirical findings from entomology, so observations of morphology or of behavior, of individuals or of colonies. And we asked whether different frameworks for consciousness that we found during the literature review might come to the conclusion that ant colonies and/or workers had consciousness or awareness. And we as authors remained neutral on the issue to the extent we could, just presenting hypothetical evaluations of colony consciousness from the point of view of these different frameworks, which foregrounds the perspectival element that's so intrinsic to consciousness. So it's kind of fun. And it turns out that frameworks simply wildly diverge in their claims about ant colony consciousness. And it's really not a surprise that it turned out that way, because the same frameworks diverge pretty plainly in their other claims, and people diverge in their opinions. And so if the frameworks are gonna disagree about humans being conscious, or electrons, then they're gonna disagree about ants. It was just interesting to experience this, though, and to ask how we could reconsider what we're really doing here in the consciousness studies area, because there are thousands of speculative papers about consciousness and animals, but are we moving the needle? And if we are, is that direction where we wanna be moving it? What did we do in the paper? We introduced this idea of a forward and a reverse test for consciousness. And we thought about forward and reverse from the point of view of the observer, the experimenter. 
So that's very related to this phenomenological invariance idea that the PCM is gonna return to in a little bit, and to this idea that everything in consciousness should be relational. Let's take this perspectival and relational approach very seriously. So what is a forward and a reverse test? A forward test is when you trust your test; it describes the mode of usage of a tool where you're measuring systems and drawing conclusions. So for example, in the figure it says, "I trust my test: how heavy are these boxes?" We have this gray test, it's unknown whether the box is conscious or not, we put it on there, and it says, yep, one. And then we go out in the world and investigate whether different boxes have different weights, or have consciousness, or whatever. That is for validated, calibrated instrumentation, and it's important for normal science, in the Kuhnian paradigm vocabulary: the day-to-day advancing of science. A reverse test is a different phase. A reverse test is when you don't trust your test yet, and this is like calibration. Calibration is when you take things that you do know about, like we know the meter is defined relative to a known standard, and we know the wavelength of a reference source is gonna help us calibrate the tool. When you use a scientific instrument, it has to be calibrated. And if we wanna have a science of consciousness, we have to have a calibrated instrument. So the reverse test is when you don't trust the test and you don't know what the machine is gonna read out, but you put on the one-pound block and you say, I'm gonna fine-tune this machine so that it reads one when I put the one-pound block on there, and then we'll know that this scale or this spectrophotometer works. And it's kind of this back and forth, this two-stroke engine that we've seen so much of. And we've been talking about the back and forth between action and inference, of course. 
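The forward/reverse distinction is just the standard calibrate-then-measure loop for any instrument. Here's a minimal sketch in Python; the toy scale, its gain and offset, and the reference weights are all illustrative, not from the paper.

```python
# A toy "reverse test": calibrate an untrusted scale against known
# reference weights, then use it in trusted "forward" mode on unknowns.
# All names and numbers here are made up for illustration.

def make_scale(gain, offset):
    """An uncalibrated instrument: reads gain * true_weight + offset."""
    return lambda true_weight: gain * true_weight + offset

raw_scale = make_scale(gain=1.3, offset=-0.2)  # secretly miscalibrated

# Reverse test phase: we don't trust the readout, so we fit a
# correction from readings on two known standard weights.
standards = [(1.0, raw_scale(1.0)), (5.0, raw_scale(5.0))]
(w1, r1), (w2, r2) = standards
est_gain = (r2 - r1) / (w2 - w1)
est_offset = r1 - est_gain * w1

def calibrated(reading):
    """Invert the fitted response to recover the true weight."""
    return (reading - est_offset) / est_gain

# Forward test phase: now we trust the instrument on an unknown box.
print(calibrated(raw_scale(3.0)))  # recovers ~3.0, up to float rounding
```

The analogy in the paper's terms: the one-pound block is a system whose status everyone already agrees on (e.g. "humans are conscious"), and a framework that misreads it fails the reverse test before it ever gets used in forward mode.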
We've been talking about that from the framework of computationalism, from the Bayesian perspective, the Bayesian structuralist perspective, enactivism and ecological psychology, and just cognition in general, agency. How do we do this forward and backward: more tool-like, based upon affordances, and more inference-like, based upon observations? Okay, so it's kind of a two-stroke engine for studying consciousness from a field perspective. So this is what the field of consciousness researchers could be studying. Now, wouldn't it be great if we had a forward test for consciousness? If we had a scale where we could go, yep, we hooked up this person's brain, or this Wi-Fi router, or this beetle, and it's a two out of four, a 2.9 out of whatever it is. Yeah, sounds great. I understand why people want to mathematize and why they want to measure consciousness. But if you meet somebody who has a good test for consciousness, then, you know, get the Tesla tech, get the perpetual motion machine, get everything you can from that person, because we don't see it within the current space we're in. We want to have this test, whether it's prefrontal cortex, language, whatever it happens to be, no caveats, just a test. Of course it doesn't exist. And so that's kind of what this issue is about. When we don't have a forward test for consciousness, even though people might think they locally have one, which I hope I've sort of dispelled with the previous slide, if we don't have a forward test to operate in the action mode, do we have a reverse test to help us work in the inference mode? That is where Professor Søvik and I wanted to think about different kinds of forward tests for consciousness, and then how the ant colony could be a reverse test for consciousness. So here's the framework that we loosely organized our literature review into, in terms of forward tests for consciousness. This is what other people might call frameworks for consciousness. 
We said, look, what all these models have in common is that there's some measurement, some tool-like intervention into the process, that could be done and that would give us a forward test. Like: is this system conscious? Does it have X? Does it have language? Does it have a prefrontal cortex? Okay. Now the reverse test for consciousness would be like: we know humans are conscious, so any theory that says that is fine with me; or we know humans are not conscious, so I rule out any theory that says that, right? Whichever one you believe, fine. So we separated things into a few different categories. Remember that the PCM authors had three categories: bio-functional, cognitive-neuro-functional, and information-theoretic. In our slightly different breakdown of the space, not a final word, just what we had found, we divided neurobiological models into the following categories. First, structural models based upon macro- or micro-structure. Macro-structure would be like: the cerebral cortex is this size, or it has this brain region. So gross anatomy, macro-structure. Micro-structure is like: wow, there are so many connections on the dendrites, or this cell type has this shape. So things that you'd need a microscope to see, macro to micro. You need the microscope for the histology, or: oh, the architecture of these neural networks has feedback, and that means it could be implementing something. So macro- and micro-structure of the brain, or of the brain and the body for that matter, because it's not always about the brain. Functionalist models for consciousness would be like: there is a function that does egocentric navigation in crawfish and crows and humans, so these are conscious entities. 
So because there's a functional element, something that's being done, even if it's being done in non-homologous brain regions, or in two different ways by two different species, or in somebody who was born with vision versus somebody who wasn't, it's the function that matters. Then another neurobiological assessment that people often make in terms of forward testing is dynamical features of brain function. So they'll say: well, the brain is a complex adaptive system with winnerless competition, leaky integration, action policy selection, Lotka-Volterra predator-prey type dynamics, chaos, avalanches and catastrophes, power-law tails of noise, all of these dynamical features coming usually from a complexity or a signal-processing perspective. Those are the kinds of appeals people make to the brain and the body. There are also the egocentric and relational frameworks for thinking about what counts as a model of consciousness. In this category there's a bunch we could go into, but just to make clear what I mean by egocentric and relational, which is actually the first invariant principle of the PCM, so totally agreed, it's critical: it's the idea of spatial awareness and perception, whether that's the lived experience or just the acting-as-if of things closer to you being more accessible, but either way, everything in consciousness is experienced in relation to something. That's agreed upon even when people don't agree what that relationship is or what that something is. So that's kind of cool. Then there are cognitive and behavioral frameworks. Cognitive here is broadly understood: things that computers might be able to do, or computer networks, or extended cognition, embedded, enacted, and encultured, all this stuff that we talk about every week. Let's take cognitive that broadly. 
So by cognitive, I don't just mean what people think of as happening between the student's eye and them filling in the multiple-choice test. That's not all cognition is. We wanna have a broad understanding of cognition: multi-scale, message passing, Bayesian nets, et cetera. There are multiple categories here, but examples are things like emotional or affective states, kind of foreshadowing the sophisticated affective inference that we'll get to in ActInf 11. Emotional states like regret, or optimism bias, and cognitive biases more generally. So everything from visual illusions to other types of illusions that aren't visual but can be related to the order in which things are presented. These are cognitive effects that potentially even a neural network might fall victim to. And what we've experienced is that there's this runaround. People will say: well, this bird has this brain region, so, a macro-structural claim, it's conscious. And then somebody brings up a shortcoming of the idea of macro-structuralism, like: well, does it really mean it's aware just 'cause it has this brain region? What about sleeping people? Or what about, you know, Daniel Dennett or something like that? And they go: well, right, it's actually the brain region and the micro-structure. Okay, so it's micro-structural. Great; then would anything with that micro-structure have that characteristic? Like, if you made a model in a computer, would it have awareness? Well, no, it has to be evolved, or it has to have language, or it has to be implementing something ecologically. Great. So we wanna chase that rabbit down, because we're serious about answers here. And someone says: oh, when I said language I didn't just mean a chatbot, I mean they feel emotion. You say: well, what if the chatbot says it feels emotion too? What are you gonna do? 
And then all of a sudden, for that whole feeling of emotion, or communication of a feeling of emotion, they have a new explanation. So let's keep chasing down these rabbit holes. One other category, the one closest to what's described in this paper, is the mathematical framework. We took a step beyond the authors of the 2018 Projective Consciousness Model paper, who said there's an information-theory category of consciousness models. We actually separated out a few of these mathematical frameworks: we have decision-making and long-term planning, as well as information-theoretic and geometrical. There have been some geometric theories of consciousness, maybe published around the same time as or slightly after this paper. And a lot of the time the information-theory approaches deal with information geometry anyway. So which one is it? I agree that it's enough to call it mathematical and just say it includes topology, information theory, thermodynamics, geometry, just a lot of math stuff. Another area of mathematical frameworks for consciousness could be cybernetics, which is why that was one of the keywords for this paper, because a mathematical framework for consciousness might involve decision-making, or agency, or the ability to plan at a certain level of coherence. And in fact, in the 2018 interview with Karl Friston by myself and our late colleague Martin Fortier, in the last question of that interview I asked Karl whether he thought the ant colony was conscious, because I was working on this paper with Eirik while writing the interview, and I just thought, wow, we could find out what Karl thinks about ant colonies being conscious. And if you're interested in either Fristonology or in ant colony consciousness, then definitely review Karl's answer on that. 
But just to close this slide off: we're committed to long-term clarity on these issues in the science of consciousness, so we really welcome collaboration on or improvement of these questions. Let's work with it, let's make an ontology for consciousness; how are we gonna get organized on this issue? Because as I said, there's just too much on the table to let this one slide, and we have to enter this space and do it right, because otherwise it just goes off the rails. Okay, let's get to neurophenomenology, which is one of the keywords of the paper. Oh yeah, the paper. On the bottom here I have a couple of fun images. On the left is the book GEB (Gödel, Escher, Bach) by Douglas Hofstadter, 1979, but it reads as crisply as ever; really an insightful book. And on the other side is The Tree of Knowledge, the yellow cover, by Varela and Maturana, a book about autopoiesis and a few other really cool topics related to complexity, and deeply related to the ecological, enactivist, semantic, bio-behavioral, a lot of great stuff. The work of Fritjof Capra is also potentially one thing to look into here. But then some images, because it's not all about language: here are some images by Magritte and by M.C. Escher that have to do with what this self is. Let's look at how the authors define neurophenomenology. They define it as: one, observation and description of the phenomenological invariants at the appropriate level of abstraction. So maybe the appropriate level of abstraction for the experience of drinking water is not a throat cell; maybe it's not a whole nation; maybe it's a person. Two, classification of the behavioral, cognitive, and bio-functional aspects of consciousness. Some drugs, like acetaminophen, don't make you feel that different when you take them; there are other drugs that are psychoactive or that have long-term effects on your brain, whether you experience them in the short term or not. 
So how are we gonna get a behavioral, cognitive, molecular, genomic understanding of that mechanism? Three, identification of the neuroanatomical and neurophysiological correlates of consciousness. I see neuroanatomy as the structural part, whether at the macro or the micro level; neuroanatomy is what Søvik and I called structural. Neurophysiological refers to the dynamics and the processes, molecular, informational, whatever they may be, in the brain; to me those map onto the functionalist models of the brain. Four, development of mathematical and computational models of consciousness, which I would consider to be in the mathematical category, that, A, faithfully integrate all the invariants described in one, which we'll get to soon; B, provide unifying explanations of, and testable hypotheses about, the bio-functional aspects of consciousness described in two, all on board; and C, can be physically implemented in the hardware identified in three, as well as perhaps in non-biological hardware. Now, I don't subscribe to computationalism. I don't believe in a hardware/software distinction, except as a useful metaphor in biology; I think the book Wetware is pretty interesting on this topic. And the reason I intervened right after the mathematical and computational models is that I think it's actually a different domain. There could be a mathematical model that's extremely agnostic to the biology, or our biological model could just be: it is what it is, and it doesn't matter what anyone thinks the math is. It is what it is because of evolution, because it passed through some sort of historical filter, and any equation you put on it is just like a linear regression; it just doesn't matter. So those can kind of fly together, and that would be great, or not. 
It could be something totally off base, which is why I think, again, even though we have the citations and the ways we can sort citations into keyword clusters, let's not assume we've mapped out the space of the possible in consciousness, because there are people in other cultures who don't see it the way scientists do. First-person perspective is, I think, the last keyword, and they write about that in the paper. They say that ideally this method of the PCM would give us a theory of consciousness that intelligibly links the phenomenology of first-person conscious experience with its realization in the brain or other substrate, via a mathematical and computational model, simultaneously accounting for its bio-functional properties. The bio-functional properties are kind of where you get the neuro part. With the mathematical and computational model, they're saying: wouldn't it be good for purposes of describing it, predicting it, controlling it, designing it, explaining variance, reducing our uncertainty? Why not? That's the mathematical and computational side. And then pull it all the way back: phenomenology. So, computational neurophenomenology; we're gonna be doing mathematical-computational neurophenomenology in this paper, and we're gonna be using geometric tools like projective transformations to think about how cybernetics and active inference are related to classic work in neurophenomenology. Okay, one more keyword: perspectival imagination. They write: this perspectival space of the field of consciousness, FOC, is organized around an elusive implicit origin or zero point and includes a horizon with vanishing points at infinity that mark its limit. So right there you have two departures from Euclidean geometry: a point of privilege, that's the point of view, and the vanishing points on the horizon at infinity. 
The FOC embeds vectorized directions that frame the orientation of attention and action along three axes: vertical, horizontal, and sagittal. So like, you know, this one, this one, and then that one. These are given in perspective with respect to these same points at infinity. Pretty interesting stuff. They continue: since we are dealing with a 3D perspectival space with an elusive origin, a point of privilege, the point of view, and vanishing points at infinity constituting a horizon plane, which happens to lie along the horizontal axis, though it doesn't have to (if you're lying on your side, it's not), the PCM postulates that the field of consciousness has the structure of a 3D projective space in the sense of projective geometry. Nice, thanks for giving it as a keyword too; it helped me learn about it. The FOC can thus partly be understood in terms of a 3D projective frame. Let's look at a few cool examples related to perspectival imagination. Here's an example of a non-Euclidean geometry: an Escher artwork, these bats that kind of fractal off. Sometimes this is related to hyperbolic geometry or other geometries. But the point is, each bat might be flying off, and that kind of maps onto our experience of a bird or a bat flying away from us and seeming to slow down, perhaps. But how is that gonna get mapped onto another space that may or may not have three spatial dimensions? Again, Bucky Fuller might claim there are four spatial dimensions, so let's not assume there are three spatial dimensions and that the fourth one is time. That being said, it makes sense to talk about x, y, z coordinates from a Cartesian perspective. Another example of this perspectival imagination is the vanishing point. Everyone who's taken a fun drawing course, or just looked down a city road, or looked out of a car driving through fields, has seen how everything kind of vanishes. 
So you can see the rows of a field and you know that they're all parallel, but when you look really deeply down one of them, the others seem to converge toward it. Another way of seeing it, in a cityscape, is that you can conceptually know the street is straight, but if you look at it, it looks like a triangle disappearing away from you with a point at the top. So what is it that allows us to go, oh yeah, buildings have 90-degree angles and they're going straight up, even though all the actual angles we see are kind of wacky? That's this perspectival imagination. And that's what makes a lifelike drawing by an artist, especially of architecture or something, look lifelike: they capture these extremely subtle features of vanishing point and perspective and all these different things. One drawing that I love on this is this one. For many people, a self-portrait of what you see when you're on the couch might go like: I just draw, okay, I see this, here's this piece of art, here's my bookshelf, here's my bike. Then the next layer back is: oh, I'm going to include my feet in the drawing, the kind of I'm-on-the-beach, here-are-my-legs thing, like taking a picture from right in front of you with a phone, something we're actually afforded the chance to do now but that other people historically may not have seen in such a direct way, which is why they didn't take selfies. Oh, hey, here I am. This drawing takes it back another layer: where are you really seeing from? Not what is canceled out by binocular vision, but what happens when you look with one eye really, really far to the side? Well, you see this curving, just a little bit beautifully accentuated, though of course we're seeing that as a two-dimensional projection. So that's just so cool. 
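This vanishing-point behavior falls straight out of projective geometry: under a perspective projection, every line with a given 3D direction maps to the same image point, the vanishing point for that direction. Here's a minimal sketch; the pinhole model, the particular "field rows," and the focal length are illustrative choices, not taken from the paper.

```python
import numpy as np

def project(p, f=1.0):
    """Pinhole perspective projection of a 3D point onto the plane z = f.

    In homogeneous coordinates this is the projective map
    (x, y, z, 1) -> (f * x / z, f * y / z).
    """
    x, y, z = p
    return np.array([f * x / z, f * y / z])

# Two parallel "field rows", both running along the z (depth) direction,
# offset left and right of the viewer.
direction = np.array([0.0, 0.0, 1.0])
row_left = lambda t: np.array([-1.0, -0.5, 1.0]) + t * direction
row_right = lambda t: np.array([1.0, -0.5, 1.0]) + t * direction

# As t grows, the projections of both rows converge to the same image
# point, (0, 0): the vanishing point for that shared direction.
for t in [0.0, 10.0, 1000.0]:
    print(t, project(row_left(t)), project(row_right(t)))
```

The vanishing point depends only on the direction of the lines, not on their offsets, which is why every row of the field, and every straight street edge, meets at the same point on the horizon.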
And styles that play with our perception, like optical illusions or cubism or abstract art, often incorporate some invariant features of perspectival imagination but then violate expectations on others. Like: half of the person's body is painted as if lit from this side, and half as if lit from a different side; that was a classic technique of the cubists. So it's a very rich space, very open for honest collaboration with artists and philosophers and practitioners. We hope this is kind of an opening for those who might have thought: active inference, okay, it's an engineering framework, we're talking about optimization, we're talking about sociocultural things, but kind of from an abstract point of view. How are we gonna take this into our experience of consciousness and of perspective? So I'm gonna return to this slide again and again, which is a question: what do the free energy principle and active inference say about the relationship between the world and agents? So that's this feedback between the world and the agents and the nested agents. And here we're adding on this level of the agent having conscious awareness. It's thinking, I am a strange action-perception loop, or at least it's acting like it says it, or it's thinking that it's saying it. And just as we've talked about before with this slide, we wanna be building on the fields of action, of inference, of physics, of experience. We wanna be tying these threads together while also being aware that just being convinced will never be enough. That's why it's such a fun area, why it's so great to do research in this area and to learn about it, and also to welcome collaboration and to welcome people sharing their perspectives. Because I really believe that no matter how many papers you've read about consciousness, that does not make your opinion any more valid. 
There are other topics where you potentially couldn't say that as flippantly, but in the area of consciousness, it can't be a counting game. It's just not. It's something so essential to our experience, and so it's such a deep area, a perfect area for transdisciplinary synthesis. Cool. Let's get to the paper, because it's also all about the specifics. We wanna be really specific about what the authors say and do, so that we can build off of it with other specific work, so that we can actually use their perspectives as catalysts for our own development. Not push ideas out of our heads and replace them with the latest update, but rather be catalyzed and spurred to new, culturally informed, creative states by reading and by discussing. So, especially for 9.1 and 9.2, for the last home stretch of 2020 and for 2021: if you're listening, we wanna know your experience, and we wanna hear your questions in the YouTube chat, or have you on live, however it has to be. This paper is called The Projective Consciousness Model and Phenomenal Selfhood. It was published in December of 2018 in the journal Frontiers in Psychology, in the Theoretical and Philosophical Psychology section, and the authors are Williford, Bennequin, Friston, and Rudrauf. So that gives you some information on when this paper was published and on what areas the authors want to place their work within. Let's take some quotes from the paper and look at the main aims and claims of the work. They write: can we describe this integrated, multimodal, and centered spatial structure of conscious experience in a rigorous way? Great question. I love the setup. And they write: and can we give a good account of why it should be thus organized? Here we answer both questions in the affirmative. 
Great questions, I love the setup, and it's something we can evaluate: is that a hearty affirmative, or a half-hearted one, or overconfident, or under-confident, or "okay, it's affirmative, but then what do we do?" These are all great questions to have, and I just love that the authors can be really clear about this. Great research, and also a great suggestion by a community member to read this paper. "Overall, the PCM delivers an account of the phenomenologically available generic structure of consciousness and shows how consciousness allows organisms to integrate multimodal sensory information, memory, and emotion in order to control behavior, enhance resilience, optimize preference satisfaction and minimize predictive error in an efficient manner." Sounds chill. If we could, we should. Great. The way that I would rephrase it, without any of the jargon of active inference, is: how can we apply active inference to the study of consciousness in all of its unified yet coherent, mysterious, dynamic, and profound facets? So they're, for the purposes of scientific writing, highlighting a lot of the vocabulary, keywords, and research fields related to control theory, information signaling, geometry, math, computation. Those are some heavy-hitting ideas, but let's also not lose sight that some of the other heavy-hitting ideas include the mystery of consciousness, and why we are who we are, and whether cells have consciousness, and whether ant colonies do, and whether colonies do but not nestmates, and nestmates but not colonies, or ecosystems but not colonies. Just let's be open-minded. So let's answer questions in the affirmative and be open-minded. Let's have "yes, and," and let's accomplish "yes, and" with everyone's participation. The abstract begins with saying: we summarize our recently introduced projective consciousness model and relate it to outstanding conceptual issues in the theory of consciousness.
"The PCM combines a projective geometrical model of the perspectival phenomenological structure of the field of consciousness with a variational free energy minimization model of active inference, yielding an account of the cybernetic function of consciousness via the modulation of the field's cognitive and affective dynamics for the effective control of embodied agents." Long sentence; double points for using affect and effect. Long sentences, but slow it down, unpack it. I think that we've gone through the keywords well enough, at least at a first pass, to show you why it would be great to connect across all these areas. Abstract, part two: the geometrical and active inference components are linked via the concept of projective transformation, which is crucial to understanding how conscious organisms integrate perception, emotion, memory, reasoning, and perspectival imagination in order to control behavior, enhance resilience, and optimize preference satisfaction. So, sounds cool. The PCM makes substantive empirical predictions, we'll see about that, and fits well into a neurocomputationalist framework. Of course it does. It also helps us to account for aspects of subjective character that are sometimes ignored or conflated: pre-reflective self-consciousness, the first-person point of view, the sense of mineness or ownership, and social self-consciousness. And they close the abstract by saying: we argue that the PCM, though still in development, offers us the most complete theory today of what Thomas Metzinger has called phenomenal selfhood. Now, I don't know Metzinger's work incredibly well, or what this topic bridges out to. Kind of an interesting way to end the abstract, but so it goes. That's what we might be getting to. How are the authors going to get from all of these keywords, like cybernetics and geometry, to what is potentially, in their opinion, the most developed framework to date of what is called phenomenal selfhood?
So phenomenology, experience, selfhood, having a self. Well, they start in section one with the introduction and go pretty directly to section two, methodology. I think this is great because it really gets into the details quickly. You could philosophize for a thousand books; people have. So what's the methodology? How can I identify what you're actually doing in this paper as a researcher without hearing 30 pages about your understanding of what Descartes did or didn't say? In section three, they're going to talk about the phenomenological invariants and the functional features of consciousness, which we're also going to go into later. And they talk about the phenomenological invariants, what they are, the five of them, we'll go through them, and then talk about how they're related to projective geometry in the PCM. So basically they're going to start with their invariants, and if you have a qualm with an invariant, you want a sixth one, you want to collapse to four: great, let's write another paper. But they're going to start with these invariant features, and then they're going to introduce projective geometry, saying projective geometry is the bridge we're going to take to get to these five invariant features. We see figure one, which gives an account, in the PCM, of psychological phenomena, and figure two, which shows projective geometry in relationship to the Necker cube, which is kind of a cool illusion that I'm looking forward to showing you. Then they go to the five functional features of consciousness, which we're also going to address. And they talk about the projective consciousness model, which is, remember: the phenomenological invariants plus projective geometry equals part of the PCM.
Now we're going to talk about the PCM and the functional features of consciousness, the cybernetic functions that they talked about earlier, and fuse that with the capacity of using active inference and free energy minimization to get some of these functional dynamics. Then they'll show figure three, a simplified 2D projective-consciousness-model-based simulation of a pit challenge, and then figure four gives an overall sketch of the projective consciousness model. In section four, they talk about the hard problem of consciousness, representationalism, and phenomenal selfhood. The hard problem, in relation to computational functionalism, is variously phrased by David Chalmers and others as basically being about what the substrate of consciousness is. Why is it that when you hit your hand, it hurts, but when you hit some brain regions, it doesn't hurt, and when you hit some other brain regions, you just pass out and come back, or you die, or something in between? How is that happening in the brain? Or is it an antenna? Is it a generator? These are the questions. Then they go from computational functionalism to talking about the PCM and representationalism. Now, all these terms, computation, function, and representation, are pretty loaded. We've gone into them in a couple of other discussions, but be open-minded as to how they could recombine in the context of the PCM. And then they're gonna take those kind of more quantitative terms, computation, function, representation (representation is a little bit of a bridge), and they're gonna bring those into the Projective Consciousness Model's phenomenological components. So that's gonna be about subjective character, subjective character being the what-it-is-like-ness, the "what is it like to be a bat," and also the phenomenal selfhood of the philosopher whom they concluded their abstract by discussing. And then they conclude. So, it looks like a great roadmap.
In this introductory video, in the time that I have left here, let's go through the functional features and the invariants of consciousness, because we're not gonna have time to go through them in 9.1 or 9.2 with all of you contributing your perspectives, and it's good to get them out here, because these are kind of like the big five for the functions and the big five for the invariants. And if you have a qualm with the model, you should first ask: is there a function or is there an invariant that I think is extraneous? Is there one that I think is missing? Is there one that I think is just off base? So if you have a question about one of these five things, you think this doesn't go far enough, or this goes too far, or "but some conscious systems don't have that," great, that's the question you wanna ask. So there are five in each category. If you think there should be more or fewer or a different one, that's one entry point. Now, if you agree with all of them and you say, yep, those are the perfect five in each category, then you can evaluate, given what projective geometry is and what the PCM is: do they address the functional and the experiential invariants of consciousness? So that's how we're gonna read this paper. And that's where I hope there are a lot of footholds for people who maybe have thought about consciousness, but this is a new way to hear about it for them. It would be great to have their comments. What are the five functional features of consciousness, according to the authors? The way that they justify this is by saying: we claim that the overall function of consciousness is to address a general cybernetic problem, a problem of control, control theory. Consciousness enables a situated organism with multiple sensory channels to navigate its environment and satisfy its biological and derived needs efficiently. This entails minimizing predictively erroneous representations of the world and maximizing preference satisfaction.
A good model of consciousness must be able to explain how this is accomplished. The PCM thus emphasizes the following interrelated functional features of consciousness. Here's why I paused before putting it up, but after justifying it: because even things that do control theory might not have consciousness. Full stop. That's the whole thing that I was getting into with the ant colony test. So then, if I have a robot and it has something that allows it to do multimodal integration across sensors, with a generative model, with a deep model, with a counterfactual, whatever it is, does it have consciousness? And then that's where the person goes, oh, well, it needs a brain, and they go down the rabbit hole a different way. So the function is important, but again, it's not the whole story, especially yet. Here are what they characterize as the five functional, cybernetic features of consciousness that we'd want to at least explain. So maybe passing these five is necessary but not sufficient, but it's important. One, global optimization and resilience. Consciousness helps the organism reliably find the globally optimal solutions available to it given competing preferences, knowledge base, capacities, and context, making the organism more resilient. This is a mechanism of global optimization, subjective here in the sense of subjective Bayesian optimization, which does not always entail objective optimality, though it must approximate the latter reliably enough, or you die. If you don't figure out, objectively enough, how to walk down a staircase, you're not gonna walk down many staircases. Now, a few things on this. First, many people think consciousness is epiphenomenal, so whatever's happening in our awareness is not reaching back to guide behavior. So this plays a little fast and loose, because it's saying, well, consciousness helps your organism reliably find. It sounds like consciousness is coming back.
And this is the whole Cartesian dualism paradigm. I know that Karl Friston and others have many more nuanced works, including "I think therefore I am"-related works, so I'm not gonna go into it. But it's like: if consciousness is something that reaches back into the physical, how is that? Or is consciousness purely descriptive, epiphenomenal, above the phenomena of mechanistically or materially defined components like neurons, message passing, whatever it has to be? Ants, no matter how emergent their properties are in the weak-emergence sense, do they ever amount to strong emergence and actually cause the system? Even downward causation and causal entropy and the renormalization group, all of that is still very philosophically agnostic with respect to whether consciousness is causal. Another thing to say here is that it's global optimization given the preferences, knowledge base, capacities, affordances, and context. So it's not global optimization; it's local optimization, or it's global exploration of a conditional space, conditional upon the things that we want it to be conditional on, like how far you can reach. But then, I don't know about you all, but my consciousness doesn't always help me get to the globally optimal points, unless that global optimum is defined not as the objectively best but as, well, you just didn't go the right way, but it seemed optimal given what you knew. It's like, okay. And it has to be reliable enough. So I'm on board. Two, global availability. The contents of consciousness must be generally, globally available, in the sense of being poised to be made use of by our effective cognitive and perceptual systems as the occasion requires. So, for example: here's idea one, idea two, a memory of the color, of the number; all these things are just apparently one layer away from accessing the workspace. This is drawing heavily on the global workspace theory and the attention-schema modeling of some other researchers.
I would just point out: "global" doesn't make sense to me. The brain isn't a globe; cognition isn't a globe. Availability, yes, but then again, how do you define available? I knew it yesterday but I don't know it now, so it's not available? Or what if it's part of my extended cognitive process and it just takes me five minutes to find it? So, okay, relevance, but globally available? And also, our awareness is hardly the tip of the iceberg, or maybe it's a whole iceberg and a half. So I wouldn't even know what it would mean to be globally available. Three, motivation of action and modulation of attention. Attention schema theory. Consciousness is generally implicated in the motivation of action based upon affective dynamics and memory. This drives the modulation of attention, though consciousness is not to be narrowly identified with attention. So that's kind of attention schema theory breaking upwards, out of just, like, an LSTM, a neural network that has an attentional parameter. We wanna go into the experiential dimension, not just the "well, it's a laser that spends a lot of its time here" or "it's a camera that autofocuses on this level." We wanna go beyond that and say: whether or not it is getting input like a camera in some of its sensory organs, the experience is somehow generated. But I'd also point to this motivation of action and modulation of attention. It's very important that these are phrased as functions of consciousness, right? Consciousness motivates action and modulates attention. Or consciousness could be epiphenomenal, and so it could be the experience of the motivation of action and the experience of modulated attention.
So in that framing, we made consciousness epiphenomenal, and we basically said: yeah, attention is something that happens with all the neurons and the firing patterns and whatnot, and, you know, the drugs, everything that happens, and then it gets to some synergetic compromise where it's looking at this thing at this time, and then you're just experiencing it. It's not simply a Cartesian theater model; it's still a bottom-up, top-down situation. But there are no simple outs here. It's neither that consciousness is reaching back and motivating action, nor that it's modulating attention. Through what dials? You know, show them to me. Oh, well, that's what meditation is. Yeah, but then all of a sudden you're getting into this area where I think a lot of stuff that's not related is pretty important. And then the fourth one. Oh, whoops, I said big five. Well, let's just think of it as the big five; the big fifth function is gonna be the secret function. But the fourth function is simulation enhancement. Maybe that gets us there. Consciousness facilitates the use of simulation, in the broad sense of imagination, not the narrow sense like simulation theory, but maybe that's a direction to go, in the service of solving cognitive, perceptual, and affective problems, and more generally in the anticipation and programming of actions and the response to their outcomes. So simulation enhancement could be, for example: I wake up and it's very dark in my room, but I have a generative model, or an accessible model, not being too formal here, of my room. So even with my eyes closed, or with very dim light, I can enhance my simulation and think, oh wait, if I open the door this way it's gonna wake up my roommate, or if I do this at this time, then down the road this is gonna be a different sensory input that I don't wanna experience. So that kind of rich mechanistic understanding of the world is what humans excel at, and it's what the current machines that we deal with do not handle well.
You can ask it a question like, could this building wear ice skates? And it might be able to parse whether those terms mean different things and whether they're keywords used by the same people or not on Twitter, but will it be able to understand and address that question? And that's kind of what the GEB book is really fun to think about. So again, four functional features of consciousness. Should there be a fifth one? Is one of these extraneous? Do you have a different function for consciousness? What is your understanding? Does consciousness have a function, or is consciousness something else? How would it affect things if it had a function? How would we measure the function? What is dysfunction? All these questions, any questions that people wanna have. Maybe it'd be cool for people who watch this to post the questions they have, and we can have a norm of just asking the questions we have out loud, because let's ask the questions we have. I don't know any other way to say it right now. The second thing that we'll talk about is the invariants of consciousness, the phenomenological invariants, and here I believe there are five. So these are the things that their model is really gonna come down to with respect to phenomenology and philosophy. This is what they're gonna focus on. This is what the mathematical model needs to aim for. This is what their formalisms should explain, predict, integrate, encompass. So this is like the big core philosophy idea that we want to do all of the math and all of the computational stuff to get at, with some type of bridge; the bridge is projective geometry, perhaps. So what does it mean to be a phenomenological invariant? They write: the following are the deeply interconnected invariants that we think characterize consciousness. We take them as postulates for mathematical treatment (a citation that you could follow up on) and construct the PCM on their basis.
So it's kind of like axioms: they're taking these as root statements that kind of don't need to be justified. They're kind of like observations, but of course this is something that we all wanna know: could there be another one? Could one of these be redundant? What if you had four out of five, do you get 80% on consciousness? Or what if we figure out or imagine a system that has three out of five? What does that mean for that system? What are these phenomenological invariants? One, relational phenomenal intentionality. So break it down by the words. All consciousness involves the appearance of a world of objects, properties, et cetera, in various qualitative or representational ways, to an organism. So again, they're not thinking too much about worms and fruit flies and ants and ant colonies. Maybe they are; I'd love to hear from an author or from a colleague if they are. But these are the debates that we're having, which is: if you observe the worm moving away from the gradient as if it had a representational, relational, phenomenal intentionality, is that evidence enough? That's the question. Two, situated three-dimensional spatiality. Of course, there's also time, sometimes called the fourth dimension. The space of the presented world, of objects, et cetera, is three-dimensional (again, big asterisk) and perspectival, unfolding in an oriented manner between a point of view and a horizon at infinity where all parallel lines converge. The origin of this point of view is elusive, though normally it seems to be located in the head. The space is normally organized around the lived body. So this is bringing us to embodiment, to enactivism, to ecological psychology, to intercultural and developmental psychology, to Ken Wilber's integral theory, also connecting us to lived experience and to spatiality and to the ways that the spaces we live in influence who and how we are.
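Since "a horizon at infinity where all parallel lines converge" is standard projective geometry, here is a minimal numeric sketch (my own illustration in Python, not the authors' model): under a pinhole perspective projection, points receding along two parallel lines collapse toward a single vanishing point.

```python
import numpy as np

def project(point, focal=1.0):
    """Pinhole perspective projection of a 3D point (x, y, z), z > 0,
    onto an image plane at distance `focal` from the viewpoint."""
    x, y, z = point
    return np.array([focal * x / z, focal * y / z])

# Two parallel lines in 3D, both running straight ahead in direction (0, 0, 1),
# offset to the viewer's left and right (toy coordinates, chosen for illustration).
direction = np.array([0.0, 0.0, 1.0])
left_line  = [np.array([-1.0, -0.5, 1.0]) + t * direction for t in (0, 10, 1000, 1e6)]
right_line = [np.array([ 1.0, -0.5, 1.0]) + t * direction for t in (0, 10, 1000, 1e6)]

# As z grows, the projections of BOTH lines approach the same vanishing point (0, 0):
for p in left_line:
    print(project(p))
for p in right_line:
    print(project(p))
```

The near points project far apart, while the distant points on both lines land arbitrarily close to (0, 0), which is exactly the "parallel lines converge at the horizon" structure the invariant describes.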
So, all of the awesome Active Stream participants who have taught us so much about embedded systems and embodiment: we'd really love to hear what work in this area impinges on one of these invariants. Three, multimodal synchronic integration. Synchronic means at the same time. Consciousness involves the synthesis or integration into a unified whole of a multiplicity of qualitative and representational components from sensory modalities, memory, and cognition. So, diachronic means at a different time; synchronic, at the same time. So memory is diachronic, but it's of sensory modalities. It could be a hearing event from six months ago. Maybe, and there's the whole question of imagination, obviously a big question, but just at a first pass, events from the past that influence us in the future are diachronic. Or another way to think about it is that they modify the system, or they put it onto a different trajectory than if that difference-making cause, to use Waters' terminology, hadn't occurred. Consciousness seems to be both diachronic and synchronic. Like when we're listening to music and seeing flashing lights, there's synchronization in the same moment that's also multi-sensory. But there's some really interesting work about the time durations of different kinds of experience. So clearly something is happening there with sight and sound. And people who are born with or without hearing, and born with or without sight: all these different people can navigate in the same world. So that says something about sight. It's not just us watching a projector, if there are people who can navigate without sight. So that's kind of what they wanna bring it back to. Four, temporal integration, which I hinted at before, but this is a little deeper level.
Consciousness involves at least the retention of immediately-past experiences and the protention of immediately-future experiences as integral elements of its specious present, the ever-present now, as it were, and a foundation for more expansive forms of temporal integration, such as distant memories and long-term planning. Distant past, distant future. Now, just to give one kind of funny note: when I was reading this, I thought, what is protention? What does it mean in this context? So according to Google, protention is a philosophy word that means anticipation of a future event. So I dug a little deeper, into the Merriam-Webster definition of protend, P-R-O-T-E-N-D, pro-tend, like a professionally set up tent. It's an archaic transitive verb meaning to stretch forth or to extend. Now, nowadays one might hear about portending future events, not protending them. And the words actually have different meanings. To portend, P-O-R-T-E-N-D, is to convey something through, like, a symbol or an omen, like "the owl portended difficult times" or something like that. But protending is something that the agent does in response to the symbol. So upon seeing the owl, the farmer protended that this was going to occur. And I would just bring up this idea that maybe they're so similar sounding, and so close in their usage, that the English language just couldn't tolerate that much ambiguity, because protending and portending sound 99% identical. And so maybe portending took popularity over protending because it removes agency from the actor, and it's one way to remove action from language. So now protention is a philosophy word, and protend is archaic, an old word, because we can't have action in speech, right? No, but non-ironically, it is really interesting to think about how these words change in meaning, and the more action we can embed in our speech patterns and our interaction patterns, the better.
Five, subjective character. Subjective character is defined by the authors as involving a pre-reflective, non-conceptual awareness of oneself and one's individuality. Now, one could be totally off base, so one could be wrong. Is it part of consciousness to be wrong about your individuality, or is it just that you're having an individuality at all? Hard to say, but the authors have it as an invariant. They write also that two through five are in fact implicated in one. So relational phenomenal intentionality is really the most important, key invariant. So that's the relational thinking, and that's why I feel like the active inference and free energy principle communities are very adjacent to complexity thinking, to systems thinking, relational thinking, design thinking, as well as team collaboration frameworks, because so many of these social-work and team-relation frameworks, as well as other types of thinking, including complexity, deal with this relational insight. So there's so much to learn and to do here, and to understand about the implications of relational thinking for consciousness, for our own experience, if not for actually making claims about consciousness. Cool. Let's close out by going through and looking briefly at a few of the figures and what they're about. Take a 30,000-foot view, so that when you're reading through the paper, you kind of know where you're coming from. So figure one has A, B, and C, and A is a projective solution to a VR-mediated dissociation in phenomenal selfhood. So this is kind of like, from the experience of the Euclidean view, we're looking at Minecraft here. The person's head is in this block in the lived space on the right side of panel A, but they're imagining the space as if it were on the left side of A. So I can imagine my office right now in terms of the right angles. Again, even when I look out, I don't see right angles.
If I lean like this and look back down and look at the angle that the room was just making, it's not a 90-degree angle, but it looks normal within my generative and usually verified experience that it is 90 degrees. And that's why projective geometry. So what do we have to do? Well, it turns out that to add in this element of the visual field, in the projective geometry in B, they're going to show this sort of, they call it an anti-space beyond the plane. I'm not even going to go into what that may or may not be, because I don't want to go down too deep. But there's some mapping reflected in B and C, what they call the God's-eye vantage point. There are certain mappings from B and C, just like there's a mapping between the understanding that I'm in a Euclidean office and my lived experience, where apparently my origin is wherever my eyes are, or something like that, right? A little bit behind there, a little in front of there. How are we going to square the circle, almost literally, of the generative model of the Euclidean office with my lived experience of the wacky angles? And what's interesting here is the visual focus. So where are all of these different centers? Is this related to, like, what if we feel like our emotional center is somewhere else in our body, or what if we have the experience of not just a simple out-of-body experience, floating above your body looking down, but more expansive experiences? We'd love to go into that phenomenology with someone, potentially from ALIUS or any other colleague. And so there are certain mappings, just like the Euclidean to non-Euclidean office; there's this God's-eye mapping. You can evaluate for yourself whether you think that's God's eye. Figure two is a projective solution to the Necker cube. Here's what's happening in this image. On the left side are two versions of the so-called Necker cube illusion. So it's a cube visual illusion, and it turns out it's an illusion.
First off, just looking at these two, for one or for both, ask yourself, how should I even say this: the biggest side, the one that's close to you, is it on the top left of each square or on the bottom right? And you can flip. Sometimes, different people in different ways can flip it in their mind, seeing it as if the biggest side of the box were coming down to the right, or as if the biggest part of the box were going up into the left, okay? Now, here is why they suggest that this visual illusion exists. Because there are no perspectival cues, we're looking at a projection on a flat plane, and so there are, sort of, two optimal solutions to the perspectival question. And because there are two competing models with such similar evidence, two free-energy-minimizing models, they say it's undecidable free-energy-based inference, which is to say that in our optimizing perception we just continue oscillating. It's like, you know: it's the old woman, it's the young woman, it's the old woman, it's the young woman, okay? That's a visual illusion. But this is one that has to do with projective geometry, where it's like, it's coming towards me, but I'm getting sensory evidence that's going that way in a different simulated frame. And then you perceive it that way, and now it's top-down. So it's this back-and-forth resonance, and there's a lot of interesting research on this. What breaks the symmetry, quite literally? And the answer is lived experience, from the point-of-view simulation. So here, if I'm closer to one side of the cube, the perspectival cues resolve it, though not absolutely. You still know the cube has a different side, and you can even imagine what it might look like. Like, oh, I'm seeing the six on the die; it's closest to me. If I were on the other side, I would see a one, and it would look bigger, and I wouldn't be able to see the six.
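That "undecidable free-energy-based inference" point can be sketched with toy numbers (my own illustration, not the paper's actual simulation): two interpretations with equal evidence leave the posterior split down the middle, and a perspectival cue breaks the tie.

```python
import numpy as np

def posterior(log_evidence):
    """Softmax over per-hypothesis log evidence (negative free energy)."""
    e = np.exp(log_evidence - np.max(log_evidence))
    return e / e.sum()

# Two interpretations of the same flat drawing: "face tilts up-left" vs
# "face tilts down-right". With no perspectival cues, the flat projection
# supports both hypotheses equally well, so the percept can oscillate.
ambiguous = posterior(np.array([0.0, 0.0]))
print(ambiguous)   # [0.5 0.5]: undecidable

# A cue from the simulated point of view (e.g. being closer to one side)
# adds log evidence for one hypothesis and breaks the symmetry.
cued = posterior(np.array([0.0, 2.0]))
print(cued)        # roughly [0.12 0.88]: one reading now dominates
```

The specific log-evidence values (0 and 2) are invented for illustration; the structural point is just that perception settles only when the two model evidences stop being tied.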
But where I'm at, I just see a big rectangular six, or something like that. And so that captures a lot of these aspects of the geometry of consciousness: you can simulate being in different places, but you're always in a certain spot visualizing it. What is that spot? That's the privileged point of view, the point of reference. What about it is special? Well, it's a non-arbitrary point in a projective geometry. What kind of projective geometry? Well, something that takes our allegedly Euclidean world and maps it into this vanishing-point experience. I think I got some of the main threads to link there, but that's kind of what they're suggesting. And free energy is gonna be the optimization, action, and inference paradigm, within a simulation (broadly construed) understanding of how symmetry is broken in terms of experience. So, big stuff, interesting topics; wanting to address this is a big move by the authors. Figure three is a PCM-based simulation of a pit challenge. At the very bottom, you'll see this is from another paper, so if you're curious about this simulation, definitely go check out the other paper that they reference. But what we can look at here, starting on the left side: there's a bunch of barbell shapes, and basically the barbell is, like, a map the agent can walk around in. So it can wander around, but it has to make it across this pit to get to a rewarding place. So there's an anhedonic zone, a safe room, a challenge room, and a goal room, I believe is what they call them. And then they're going to be thinking about that moving circle (let's see if I can do this, there you go): this moving circle is like the agent moving through the map. Here are its expectations and where its prior beliefs are. And here it is resolving its uncertainty in what kind of looks like a radar dial, where that white spot in the middle is like the non-arbitrary origin.
That's the consciousness zone, so to speak. From the POV of this organism, in its FOC, its field of consciousness, which is a geometric mapping from experienced space into its generative model, it is navigating that mapping somehow through consciousness. And that also bears on the difference between having a weaker imagination and a stronger imagination. They have this active inference engine with multi-sensory integration, and it looks like something to dig into for 9.1 and for our own future research. But basically, we can run model scenarios, and, it's a funny thing to say, I can imagine that if you ran this experiment, you might find informative results suggesting that enabling an agent to have imagination in an action-oriented geometry lets it do better than a model without one, a model that spends most of its time learning about relationships that may or may not be relevant in the geometrical frame where sensory data actually occur. Again, long sentences abound, but I think there's a lot in Figure 3, and also a lot to learn about how they're framing free energy. All right, last figure, Figure 4. Figure 4 is an overall sketch of the PCM, the Projective Consciousness Model, and there are a few different dimensions here. One is the observer at the center with their lived space. Stephen, this really reminds me of peripersonal space, so I hope you'll have a lot to teach us in this domain. And within this domain there's the field of consciousness and the field of affordances. Let's think about it from many different perspectives; by saying one thing I'm not meaning to limit it, I'm just offering one way we can think about it. Let's imagine that there are perceived and imagined events.
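Staying with perceived versus imagined events for a moment: the Figure 3 claim, that an agent able to imagine outcomes in its action-oriented geometry outperforms one that cannot, can be caricatured in a few lines. This is a made-up toy, not the authors' multi-sensory PCM engine: a 1D corridor with a pit, where the "imaginative" policy rolls each action out in an internal world model before committing, and the reactive policy never simulates consequences.

```python
# Toy corridor: positions 0..6; the pit at 3 sends you back to the start;
# the goal is at 6. Actions: step one cell forward, or jump two.
PIT, GOAL = 3, 6
ACTIONS = (1, 2)

def world(pos, action):
    """Transition function, doubling as the agent's internal model."""
    new = pos + action
    return 0 if new == PIT else min(new, GOAL)

def reactive_policy(pos):
    # Never imagines consequences: always takes the cheap single step.
    return 1

def imaginative_policy(pos):
    # "Imagination": roll each action out in the internal model and
    # pick the one whose imagined outcome lands closest to the goal.
    return max(ACTIONS, key=lambda a: world(pos, a))

def run(policy, max_steps=50):
    pos, steps = 0, 0
    while pos != GOAL and steps < max_steps:
        pos = world(pos, policy(pos))
        steps += 1
    return steps if pos == GOAL else None  # None: never reached the goal

print("imaginative agent, steps to goal:", run(imaginative_policy))
print("reactive agent, steps to goal:", run(reactive_policy))
```

The imaginative agent jumps the pit and reaches the goal in a few steps; the reactive agent walks into the pit, resets, and cycles forever, which is the flavor of result the authors' much richer simulation is after.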
Now, observations are just models of observations, so maybe there's less of a distinction there than it might seem. But in this observer-centric geometry, with all of this projective mapping happening, can we think about aligning your vector through time down a gradient that goes from red to green? It's funny, because it's a gradient that colorblind people of certain types may not be able to perceive, which almost perfectly entails everything we're talking about here: that action, in some absolutist sense, is moving downhill thermodynamically. There are very few people who would say that action moves uphill thermodynamically; bring them on the stream. But it has to go uphill conceptually or symbolically in certain cases, if not simply logistically, going uphill now in order to go downhill later. So here we see a gradient reflecting free energy: culturally, green is okay, low surprise; red is high surprise, high alert, high attention. Where have we seen that before? Taking that culturally imbued color scale and aligning a time vector as a dimensional projection onto the figure is a way of saying that through space and time, in the context of the projective geometry and affordance capacities of the organism, action is still downhill in a sense. I don't know exactly where this fits with all the mathematical formalisms, but it's a nice way to talk about the action, perception, prediction, imagination loop. That's another loop we might want to learn about. We've talked a lot about loops between agent and environment, but now we're almost thinking of the agent within the agent, going from action to perception to prediction to imagination. That's something happening inside of us, potentially. And then on the right side, we're looking at time series. Those are the directions we want to take.
All free energy research runs through time, through space, and any other dimensions we come across. We'll go through those too, but the cool dimension to think about is deep causal inference with the sound and what's visualized in the figure. Other people may see very different things in this figure, and we welcome hearing about it. Let's close with some final thoughts and questions to end on; maybe we'll start ending with questions more often. They write in their paper that computationalist a posteriori (that means "after the fact") identity theory "allows that apparently contingent processing constraints may be essential to real consciousness. As long as the computational model captures the phenomenological invariances, meets the functional constraints and survives the tribunal of prediction and experiment, we should happily accept that some apparently contingent and only empirically knowable constraints pertaining to the physical realization of consciousness are actually essential to it." That is basically saying: look, if we can explain and predict the invariances, the five invariances, and we can capture the functional attributes, there are four of them, though for both sets, the invariances and the functional attributes, anyone is welcome to add, remove, or change entries, it just needs to be justified and understood. So they're saying that if a model does both of those things, and this is why I spent so much time on it in this talk, because it's their central claim, it's not just, broadly, "we could do active inference and think about consciousness." It's saying that if we make these invariances part of our model and we at least capture all the functional attributes we want, then any model that survives unique explanation and prediction, the tribunal, the troika as it were, is basically going to come along with some wacky baggage, like a parameter we don't understand, or the God's-eye view.
And then they're just saying: you're going to have to accept that as contingent. A fun thought by the authors, and just a really well written and conveyed paper. I hope it broaches many areas we've talked about in other Active Streams. Here's my closing set of questions. One: how would we recognize a framework for consciousness? And a related question: what might a framework for consciousness be? What if it's a poem? "Oh, well, it can't be a poem." How do we know? Or what if it's a drawing? Or what if it's a set of ontologies, narratives, formal documents, and tools? Could it be that, ONFT? Could it be BOLTS: business, operational, legal, technical, social? Could it be acceptance? Could it be fundamental agnosticism? Could it be the free energy principle? How will we know? And that is really the key question for theories of consciousness: not "does it feel right?", but "how will we know, and do you hold yourself to that standard?" Because we can do a lot better in this area, and this paper is an example of great work in the space. The next question is: what might a framework for consciousness enable? And that's pretty broad. What would it change in our action practice? Would we put more people in fMRI scanners, or fewer? Would we communicate with beings we don't currently think of as able to communicate, or not? Why not? How might it change our legal systems or our social systems? What if we could measure different kinds or types or levels of consciousness? That's really why we need to get down to brass tacks on what a framework might look like, what it measures, and what its predictions are. Because if we don't know the unique predictions or experiments, and this alludes to earlier when I said there's a lot on the table, someone is going to say: well, I have a consciousness-o-meter, and this one measures a two and this one an eleven, and this type of object you care so much about is actually not conscious, so I'm just going to unplug it.
But this other one that I care a lot about, I measured it with my device and it was very conscious, so it's very worthwhile to save. And where will people make a stand? The stand has to be on common ground, in the space of recognizing frameworks for consciousness, in my opinion. I'm open to changing my mind on that, but I think if we don't meet at the question of how we'll recognize such a framework and what it might look like, if we don't start by respecting world knowledge traditions and differences of opinion, and instead we stay in a hegemonic mode, an absolutist mode, a non-pluralist frame, culturally and methodologically, if we don't go down the pluralistic route, it's going to be, I think, very bad in practical terms. And then lastly: what are the next steps for active inference in this domain? What do we want an active inference conscious agent to do? Or how can we build on active inference now? Do we want it to go deeper into consciousness, deeper into action? We've spoken about action, policy selection, and machine learning. Could we put consciousness into machine learning? What would be the philosophy and the implications of that? These are all big questions, but how are we going to take this paper and develop it in the directions of active inference, even if only by making educational content? And finally, maybe always a good question to address: what is the goal of this research? Sure, we're curious about consciousness, but aren't we curious about so many things? What makes this research worthwhile? Why are we doing it? How have we selected our how? Why have we selected our how? How have we selected our why? We need to keep every one of these combinations in mind, especially when it's about us, about our experience, about consciousness, about people. It just can't be said enough: there's too much on the table for this not to be participatory, inclusive, cross-cutting, intercultural, the list goes on.
So to everybody who's listening and thinking about it: thanks so much. This went a little longer than one might have predicted, but c'est la vie, right? When active streaming, why not? Thanks for participating. We will provide follow-up forms to the live participants, we actually will, believe it or not, and we will request your feedback, suggestions, and questions. So if you have any thoughts: again, I tried to flag, a bunch of times, really specific moments where somebody could pause the video and say, "yes, this thing right here, I do or don't agree with," or "this does or doesn't resonate with my experience." That would be epic, because let's hear from a lot of perspectives on consciousness. It's important to hear about that. And any feedback, suggestions, or collaboration offers for the Active Stream are always welcome. You can stay in communication with us through any of our channels or means. But thank you all for listening, those of you who were listening live and those of you who will listen in replay, because the future is always now, isn't it? See you later.