Hello and welcome to the Active Inference Lab. This is Active Inference guest stream number 3.1, and we're really excited to be here today with Sarah Hashkes. We are gonna be talking about, as the title slide suggests, Active Inference, VR, and Psychedelics. So Sarah, I'll leave it to you to introduce the topic and the ideas however you want. It sounds like we're gonna have a first section of the presentation, then question and answer, then a second section of the presentation, and then a final overall discussion. So with that structure in mind, please introduce yourself and take it away. Hi, everyone. So yeah, we're gonna be talking first about my research in VR and Active Inference, then my research in psychedelics and how it relates to a few issues in Active Inference, and we'll have a short break in the middle for questions. So yeah, my name is Sarah Hashkes. I have a master's degree in cognitive neuroscience. I opened the first VR lab in the Motor Control department at Radboud, where I was researching VR's effects on our brain-body connection within the predictive coding framework. I'm now the CEO of Radix Motion, a startup that combines neuroscience and immersive technology, and we're focusing on empowering the psychedelic ecosystem with embodied tech. So yeah, let's get right to it. All of this lecture is based on my research from academia; it was pretty nice to go back and reread all the things from 2017. But before I actually get into it, I'd like to activate our Active Inference with a very short interactive experience. So wherever you are, especially if you've been sitting a lot, I suggest you stand up. Hopefully you can see my hand on the screen, and I'm gonna ask you to put your hand in front of your screen and just follow it for a few minutes, no, moments. How about doing it with your mouse? Moving it with your mouse? Yeah, okay, well, okay, we can do that too.
If you can see, oh, I see, I'm not seeing my face in this. Okay, yeah, so follow the mouse for a few seconds and see if you can track this. So what just happened here? Okay, I gave you an auditory instruction. That hit your brain and created a goal, a top-down prediction. You didn't see my hand, you saw my mouse, but basically that prediction was that the position of your hand relative to the mouse should be the same position, that the difference should be zero. Now, your eyes were then giving you bottom-up sensory input telling you, oh, there's this difference between the mouse and the hand, and due to active inference, wanting to minimize this prediction error, your hand moved towards the mouse. So this is a very simple example of active inference, and this follow-me instruction and activity is really quite basic to how we learn motor skills as children. You can see this in the way kids play games, whether it's Simon Says or patty-cake, and in the way children look to their adult figures to copy them and learn movement patterns. And this follow-me example was part of the research I was doing, so we'll get into that. So what exactly was I trying to do? Well, I was really hoping that if I could receive visual cues from inside the body of a professional mover, I would be able to activate this active inference ability of my brain and actually improve my movements. I have a background in martial arts and dance, but I'm actually, from childhood, a very clumsy human; it took me, I guess, a lot longer to learn motor skills. And I was looking for a way to hack this. I kept dreaming: oh, if I could just be inside the body of my martial arts teacher, would I be able to punch like him? Would my body learn to adapt and move faster and more accurately, because I would be receiving this prediction that I am actually him? So this is sort of where I started exploring how I could research this.
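The follow-me loop just described, a goal prior plus bottom-up error plus action, can be sketched in a few lines. This is an illustrative toy, not code from the talk or the experiment; the update rate and positions are invented.

```python
# Toy sketch of the follow-me example: a goal prior says the hand
# should sit on the cursor (difference = zero); action shrinks the
# prediction error each step. Illustrative only, not the study's code.

def follow_cursor(hand, cursor, rate=0.5, steps=20):
    """Move `hand` toward `cursor` by acting on the prediction error."""
    trajectory = [hand]
    for _ in range(steps):
        error = cursor - hand       # bottom-up: sensed mismatch
        hand = hand + rate * error  # active inference: act to reduce it
        trajectory.append(hand)
    return trajectory

path = follow_cursor(hand=0.0, cursor=10.0)
final_error = abs(10.0 - path[-1])  # error shrinks toward zero
```

The hand halves its distance to the cursor every step, which is the behavioral signature of the instruction "make the difference zero."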
Now, why would I even think it's possible using VR? Part of the thing about VR is that it doesn't just activate your visual sensory input. It's actually a multimodal integration between your visual perception and your proprioceptive, auditory, and even haptic senses, if you're using haptic controllers. In my experiment, we were using the Oculus DK1, no haptic controllers; I was connecting them to lab equipment to get the positions of the hands. But we were still connecting visual and proprioceptive sensations, meaning basically that when you move your neck around, the world is still moving the way you're expecting it to move. So you're getting the same upper-body and neck muscle activations for where you are in space, along with your visual perception. And that activates these higher brain areas and really gives you what we call the sense of immersion: you're in that place, your brain believes it's real, even though parts of you know that it's not. Now VR really gives us a very unique ability to influence the brain, because we can start controlling the relationship between the different senses in ways that either fit these previous top-down biases or don't. We can, this way, really modulate the amount of prediction error that we're receiving and activating in the person's brain. And there are really a lot of experiments out there, for instance with anorexia patients and people with pain issues, that really show that we can give a new sense of self to a person. So when I talk about a sense of self, what am I talking about from the predictive coding perspective? There's a really, really great paper by Apps and Tsakiris that explains this concept of a minimal self: the top-down prediction that just explains the sensory correlations between the different senses. And a great example is the rubber hand experiment.
So if you don't know this experiment: your hand is actually hidden from you, like in the photo, but it's receiving tactile stimulation at the same rhythm that you're seeing a rubber hand being stimulated. Now, researchers are a bit cruel with this experiment. At some point they take a hammer and smash the rubber hand, and you can really see, register, and record the physiological measurements from the person and how that person gets stressed. Newer research is actually showing that this isn't the same for everybody. Some people seem to be more susceptible to this than others. There's some amount of suggestibility, and I guess the strength of your own sense of self, how mutable this top-down prediction is, might actually be quite different across people. And we'll talk about that later. But generally there is a very strong ability for us to change our sense of minimal self. And we do this all the time. When we wear clothing, our predictive abilities become able to just predict away the sensation, unless you're somewhere on the autistic spectrum, and then every little itch and sock can drive you crazy. And also when using tools, these abilities really extend what we call our minimal self. So I was hoping that with VR, I'd be able to transform myself from a clumsy person into somebody that can do movements that are much, much more difficult, in a much faster time. So what did I actually do? I needed to find a task that was known to be a hard task and that I could measure, right? What is the difference here? So there's a phenomenon that's called bimanual interference. Bimanual interference is basically our inability to produce different movements with each hand without getting an actual interference pattern between them. A very simple game, which is what I coded in my experiment, is: try to make a circle with one hand and a square with the other, right? And just try for a second.
And what basically happened is that you go between phases of making a circular movement and phases of making a square movement. So I set out to look at this task and see whether I could improve people's performance on it with various VR interventions. So, oops, I don't know why that's showing here, but okay. So let's look a little bit at the experimental design, at all the different things that I was doing. I'm not gonna go into exactly all the analysis; you can read the paper, because I wanna talk more about the actual discussions and the ideas that come from this that we can implement in our daily lives. But there were basically a few different conditions. One condition was a follow-me condition, similar to what we did at the beginning of this lecture, where you had an avatar that was able to do these movements and you were instructed to follow the avatar. The other instruction was to complete the movements yourself. So there's a self condition and there's a follow condition. Now, in the follow condition, we actually discovered that we needed to hide the square and circle, because the minute people were already predicting that we were doing a square and circle, they were not following as much. So when you were following, you were doing random follow movements along with periodic small parts of circles and squares, and I then analyzed specifically those parts. So that was one, I'm gonna say, sort of baseline manipulation, to see what is the difference between following someone versus doing these movements yourself. And the second line that you can see on this experimental design was about making things stranger with what we can do with VR. In the second condition, when you moved your right hand, you actually saw your left hand move. Now, there's no way I can recreate this for you without putting you inside of VR. It's a very trippy experience. Everybody's like, wait, what?
You're like moving your right hand, but your left hand is moving, and that really messes up your predictions. Now, in the third condition, I put you inside of the avatar that was able to do these movements and then told you: follow the movements, or rather, become this avatar and follow the movements from within this avatar. So we actually manipulated two things. One is the high-level task goal, right? Follow versus move yourself. Now, we assume that the self movement has a very strong hyperprior of allowing only one task. This is something that we're sort of meant to be doing naturalistically: one task at every given time. We also have the basal ganglia, which is really blocking more than one activation of a physiological motor activity. So that was one thing we manipulated: follow versus do it yourself. And the second manipulation was of the visual feedback, right? Do you see your own hands moving? Do you see your weird crossed hands moving? Or do you see an avatar's hands moving from the perspective of your own hands? Now, there's again a difference here: when I crossed your hands, we were really giving you lots of prediction error. This is prediction error that doesn't fit the normal minimal-self activation and predictions that you have, versus when I'm putting you inside of an avatar that's moving in the same way that you are trying to move, I'm actually not doing something that's against this minimal-self prediction. And we'll see how these things gave different results. So let's start with the first thing that I was most hoping to get: oh, I'll just plop myself into an avatar that can do these movements and I'll be able to do them. Well, that didn't work. In fact, we got the exact opposite. So when I say this increased the interference, I mean that the people were less able to perform this difficult task. They were having even more periods of both hands doing the same thing, either circle or square, and were not able to differentiate them.
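One crude way to picture the coupling being measured here, with the caveat that this is an invented toy metric and not the paper's actual analysis: when both hands fall into the same shape, their movement traces become correlated, so the magnitude of the correlation between the two hands' x-traces can stand in for an interference score.

```python
import math

# Toy interference score (invented for illustration; not the paper's
# analysis): correlate the two hands' x-traces. If both hands fall
# into the same shape, the traces correlate strongly.

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def interference(left_x, right_x):
    """1.0 = hands locked into the same shape, near 0 = independent."""
    return abs(pearson(left_x, right_x))

ts = [2 * math.pi * i / 100 for i in range(100)]
circle_x = [math.cos(t) for t in ts]                        # circling hand
square_x = [1.0 if math.sin(t) >= 0 else -1.0 for t in ts]  # square-ish hand

coupled = interference(circle_x, circle_x)      # both hands circling
independent = interference(circle_x, square_x)  # genuinely different shapes
```

Subjects drifting into "both hands circle" phases would score near 1 under a metric like this, which is the behavioral pattern the condition made worse.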
So what actually likely happened here, right? And this is sort of a theory, because I was not looking at neural correlates, I was just looking at behavioral data, but there is a general consensus in the literature that the visual modality, at least for most of us, is much stronger than proprioception. So it seems that we were getting this mismatch between the sensory inputs, right? I'm seeing my hands doing the thing, but I'm feeling my hands in a different place. Now, I was hoping that that would cause active inference, pulling your hands to where they were supposed to be, but it seems that the opposite actually happened. When I asked people, they were all actually believing that they were doing the task better. Some of them actually thought they were the avatar; they didn't even realize that this was a pre-programmed avatar, they thought it was them, and all of them reported it to be easier. But when I looked at the data, all of them were actually doing much worse. So it seemed that the visual modality caused the proprioception modality to be down-weighted and become less accurate. When your brain has this high prediction error, there are a few tactics it can use: one is active inference, the other is modulating one of the sense modalities, and it seems that's what it did. It lowered the precision of proprioception and stuck with the sensation from the eyes, which it could predict, and that's the way it minimized the prediction error. We can talk a bit more in the discussion about the meaning of this and what it means for training, but again, this was against what I was hoping and against my initial prediction. Thinking about it a bit deeper, though, it does make sense, because accurate feedback about what you're actually doing seems to be incredibly, incredibly important for learning any type of skill.
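The down-weighting story has a standard textbook form: combine the cues weighted by their precisions (inverse variances). A minimal sketch, with invented numbers, of how raising visual precision drags the fused estimate toward what the eyes report:

```python
# Precision-weighted cue fusion (the textbook predictive-coding
# picture; the numbers and scenario are invented for illustration).

def fuse(visual, proprio, prec_visual, prec_proprio):
    """Combine two noisy position estimates, weighted by precision
    (inverse variance)."""
    total = prec_visual + prec_proprio
    return (prec_visual * visual + prec_proprio * proprio) / total

# The avatar's hand (seen) is at 0.0; the real hand (felt) is at 1.0.
balanced = fuse(0.0, 1.0, prec_visual=1.0, prec_proprio=1.0)   # halfway
vision_wins = fuse(0.0, 1.0, prec_visual=9.0, prec_proprio=1.0)  # near the avatar
```

With proprioception down-weighted, the fused estimate sits almost on the avatar's hand, which matches subjects reporting the task felt easier while actually performing worse.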
So giving this false feedback of somebody else doing the movements seems to not be the way to go about teaching movements. An experiment I would love to conduct in the future is having both of these as an overlay: have the avatar that's able to do the movements, but also give you a physical ability to actually know where your hands are. And this is something that, if people have VR, they can actually try: you can download my VR app that's called MIU, which lets you play with these things yourself. It basically records 3D movements, and then you can move around them and also get haptic feedback and see where your hands are inside. So I played around with that a bit, with, like, professional jugglers creating movements and then me stepping inside and needing to follow their hands, but also receiving feedback about where my hands are through haptic feedback and visual perception. But I have not done academic research on that yet, and I think it could be really interesting, because I think there might be something there. Obviously the prediction error was increased, meaning your brain needs to deal with this prediction error in some way. So there is a potential of still creating better updates and maybe activating this active inference ability, but we can't rely only on visual perception: visual versus proprioception, the visual just wins, and you don't feel where you are anymore at that same level of accuracy. So that was interesting. Now let's look at what I found in the other two conditions. In the other two conditions, I found that when I actually gave people something totally incorrect, something against their minimal self, I crossed their hands, they were moving their right hand while seeing the left hand move, something they had never experienced before, they were able to follow the avatar better, but the self movements were worse.
And that actually does make sense, because again, we're interfering with this self-level prediction. Having to maintain this goal of doing a self movement while your whole minimal self is just totally wonky, doesn't know what's going on, and has a much higher prediction error is causing interference and making you unable to do this task as well. But on a follow task, the goal is really different: you don't actually need a high-level, goal-oriented self structure that's creating these predictions, you just need to minimize this difference in location in space versus the other person, so they were actually doing better. And we'll talk a little bit about why that might have happened. Yeah, so just one more finding before we go into a bit more of the discussion: there were actually two very different strategies that might have been employed here, as we see between the two different tasks. When you're in the self task, with this higher prediction error that you need to deal with, you can't just make yourself disappear; you need to keep maintaining this goal. So maybe your brain created a weaker prediction, and then it would just lower that prediction error by having a wider, I guess there's a lot here between stronger, weaker, and wider. My professor was very much, within the predictive coding theme, looking at things not as a continuous Gaussian but more at a discrete level, and a lot of this is about detail, the level of detail of your prediction. So if you have a less detailed prediction, you're going to decrease your prediction error. And we'll talk a little bit more about that when we talk about psychedelics, because it's very relevant there. But if you're just saying, okay, my hand needs to be around here, versus, my hand needs to be exactly at a specific point, that's another way to decrease prediction error. So on the self task, that might be what was happening: your brain was just decreasing the detail of the prediction in order to
decrease the prediction error, which would result in worse behavior. Versus on the follow task, your brain might have reduced the prediction error by actually reducing the visual weighting and, in the relative weighting between proprioception and vision, getting more of this proprioception: creating these closer feedback loops with where your body is in space and getting more independence between your hands, because you don't need to have this single task anymore, you're just following. And an interesting finding, I don't have the citation of the paper here, but there was a paper that did the exact same task but with eyes closed and just touch. So people had their eyes closed and were just touching something that was moving in a bimanual, independent manner, and they were all able to do it. Suddenly, from being a very difficult task, it became a very easy task. So this really shows that sometimes, and I think for anybody that practices movement it's sort of a common trope, there's a thing of, like, thinking too much. Goal-oriented movement can actually hurt: if we are taking this goal-oriented movement and specifying it in a way that's too detailed, we're telling our brains to create these movement paths, and to create two separate movement paths, and that's a really unnaturalistic state for our brain to be in. Versus when we're just, like, okay, we're following things, that's very naturalistic; the low-level feedback between our motor cortex and our proprioception and our sensation can just work. So yeah, hopefully that all sort of made sense. Now, another finding was that there was a much higher variance than I expected. I was expecting this to be a very hard task for everyone, that's what the literature said, but I found that, and again, this is a small experiment with about 13 psych students, right?
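The "less detailed prediction" tactic has a simple quantitative face, sketched here with invented numbers: under a Gaussian prediction, the surprise (negative log-likelihood) of a given miss is far smaller when the prediction is vague (wide) than when it is precise (narrow), so widening the prediction is itself a way to shrink prediction error.

```python
import math

# Toy illustration (invented numbers): surprise of an outcome under a
# Gaussian prediction with mean `mu` and width `sigma`, up to a constant.

def surprise(x, mu, sigma):
    """Gaussian negative log-likelihood (constant term dropped)."""
    return 0.5 * ((x - mu) / sigma) ** 2 + math.log(sigma)

# The hand lands 2.0 away from the predicted point.
detailed = surprise(2.0, 0.0, sigma=0.1)  # "exactly here": huge surprise
vague = surprise(2.0, 0.0, sigma=2.0)     # "around here": much smaller
```

The same miss costs orders of magnitude less under the vague prediction, which is the trade-off proposed for the self task: lower prediction error bought at the price of a less detailed (and behaviorally worse) movement plan.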
All healthy subjects, young, but two of them were very able to do the baseline task, and that was a surprise. For some of them, the task was so difficult that I could barely use the data, because they were literally just moving one hand while the other hand was freezing in space, and then the other hand would move. So they weren't even able to maintain these two goals of moving the two hands in separate ways at all. So that really goes to show just how different our brains are. One of the interesting things to compare this experiment to is binocular rivalry, which has been written about a lot in the predictive coding literature. For whoever doesn't know, binocular rivalry is when we project a different image onto each eye. So one eye is seeing one image; you can see in the picture the right eye is getting a red image with one angle of orientation, and the left eye is getting a different one. Now, what happens is your brain actually switches: your perception switches between seeing one of the images and the other image, with some short phases, usually, of mixed perception, but really just bouncing mostly between one and the next. Now, here too there's a lot of variance in the population, and one of the interesting things is that people on the autistic spectrum, and I have endless criticism about the DSM and how we're classifying autism and generally mental health and different types of brain abnormalities, but some people definitely have much more mixed states than others, and different drugs can also affect the ability to have mixed states. And it's sort of interesting to compare these two tasks, because both of them are creating a bistable system, coming from an unnaturalistic task or an unnaturalistic input that you're not usually supposed to be getting, because the way our brain was really meant to be doing motor actions is around high-level goals, completing high-level goals, right?
Give me the water, turn on the thing, grab, bang: there's always a high-level goal that's oriented towards one of our higher-level needs, a specific need that you're trying to complete. And with this task of telling people, oh, move your hand in this motor pathway and your other hand in that motor pathway, it's very unnaturalistic for our brain, very similar to giving someone these two different visuals, which we never see in real life. So again, this is really correlated to having a very strong hyperprior. In a binocular rivalry situation, your brain has a very strong hyperprior: no, I can only be seeing one thing. This is how the world works. There can only be one thing in one place. And then your brain, in order to minimize this prediction error, goes between: okay, I guess I'm seeing this one now. It keeps getting prediction error. Oh, I guess I'm seeing the other one. So it keeps jumping between these two states in order to minimize the prediction error. Now, quite likely the same thing is happening with this bimanual interference, with a very strong hyperprior: oh, I'm only supposed to be doing one task at any given time. You're giving me two different tasks to complete at the same time? I'm gonna be switching between them. Now, possibly, when you have this different sense of minimal self, this can actually create different activation patterns and let you be more in a mixed state, more in an independent state, have basically weaker hyperpriors around either what you're seeing or what your task abilities are. So this might be an interesting way of generally understanding a lot of what our hyperpriors are doing to us, and a lot of the variance in the population might be around these hyperpriors. So now let's get into the possible implications and how we can use this data to do cool things with rehabilitation, and even just training, and getting me to be less clumsy.
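The switching dynamic just described, a one-percept-at-a-time hyperprior plus slow adaptation of whichever interpretation currently wins, can be caricatured in a few lines. This is an invented toy model (the bias, fatigue, and recovery numbers are arbitrary), not a fit to any rivalry data:

```python
# Toy rivalry model (invented numbers, illustration only): a winner-
# take-all rule enforces the "one percept at a time" hyperprior, a
# stickiness bias produces dominance phases, and the winner slowly
# fatigues until the other interpretation takes over.

def rivalry(steps=300, bias=0.3, fatigue=0.02, recovery=0.01):
    adapt = [0.0, 0.0]   # accumulated adaptation per interpretation
    current = 0
    percepts = []
    for _ in range(steps):
        evidence = [1.0 - adapt[0], 1.0 - adapt[1]]
        evidence[current] += bias          # stickiness of current percept
        current = 0 if evidence[0] >= evidence[1] else 1
        percepts.append(current)
        adapt[current] += fatigue          # dominant percept fatigues
        adapt[1 - current] = max(0.0, adapt[1 - current] - recovery)
    return percepts

seq = rivalry()
switches = sum(1 for a, b in zip(seq, seq[1:]) if a != b)

# Run lengths = dominance phase durations.
runs, run = [], 1
for a, b in zip(seq, seq[1:]):
    if a == b:
        run += 1
    else:
        runs.append(run)
        run = 1
runs.append(run)
```

Even this caricature alternates between extended dominance phases rather than settling, which is the bistability being compared to the circle/square switching; weakening the bias term is the toy analogue of the weaker hyperpriors that allow more mixed states.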
So one of the things that comes out of this is really naturalistic design of movement goals, instead of telling people, move your hand at this angle, reach over here. And we actually see this a lot in traditional martial arts: they give movements the names of activities. In kung fu you have stroking the horse's mane, and in karate we have a move that's called breaking the wall, or breaking the gate. Having goals that are really oriented towards task completion, versus goals that are too low-level, is much more likely to activate this active inference in the ways we want. When we put the goals into too-low-level patterns, we're creating a sort of unnaturalistic activity and maybe even slowing things down. The fast parts of our brain that deal with bringing together these sense modalities of visual sensation and motor action need to stay down there; they need to not be part of this high-level self network, the upper part of our hierarchical network. And I think that's really interesting, because it really is what I was saying: every martial arts teacher my whole life has told me, stop thinking, and I'm like, I don't know what to do about this. How can you stop thinking? But part of it is letting these low-level parts of your body do the movements while maintaining high-level goals that are not oriented to exactly, oh, where's my elbow, is it here or here, where am I holding my fist? All these little details work much better if you give them high-level metaphors, so that the lower levels of your brain can take those predictions and minimize prediction error. Now, another interesting part of this is that it might actually explain some of why mirror box therapy works, and why mirror box therapy is now something that's also being used in VR. For whoever doesn't know, mirror box therapy is used with amputees and people that have pain in one hand, like chronic pain, or even people that have suffered a stroke.
So what you do is you take the body part that is actually okay, that doesn't have pain, that is working, and you use a mirror to get your brain to believe that your other body part is also moving and is healthy, or even there, in the case of an amputee. This was actually figured out, I think the first person was Ramachandran, working with amputees that were suffering from phantom pain. And how do you get rid of pain from a limb that doesn't exist anymore, right? It was a top-down prediction from the injury that was just not getting updated. And you see this a lot in chronic pain: you really see the neural circuitry change as the predictions for pain cement themselves and become biases in your brain. So using this type of mirror box therapy helps relieve the pain. So why, or how, is this connected to what we just saw? Because the visual modality is stronger than the proprioception modality, it's not too far-fetched to believe that the visual modality is also stronger than the nociceptors, the pain receptors. So if we can cause your brain to believe, just by seeing, just by that visual perception of a healthy hand, it might be enough to mute your nociceptors, the same way your proprioception was muted, and then reduce the pain. So that's really interesting. And it's interesting to see that they are also starting to use this with stroke victims to help them rehabilitate their hand. And that's really interesting because, right, that's a little bit not in accordance with what I saw in my experiment. But it might be different, because they might not be getting current data about their hand. I think it might really depend on what the stroke has actually damaged in your brain. If the stroke has damaged your ability to receive any type of proprioceptive data, then maybe getting that visual perception might help reprogram that. There might be less of an interference going on.
I'm not sure; it's sort of interesting to think about. So let's see, what are we left with? Oh yeah. So now there's basically that third condition, the reason I included this crossed-hands condition. I think that's really an underutilized mechanism generally, in any type of rehabilitation and training: messing your brain up even more, in some ways. And I think that's a very strong intervention that we could play around with a lot more, because we're 100% creating more prediction error that your brain needs to deal with. Now, how will your brain deal with that prediction error? Will it actually remodel with active inference? Will it reduce some other sensation? It might really depend on the task, on your brain, on how we do this, but being able to increase prediction error in your brain is a very powerful ability. And I think that's something that's really worth looking into more when we're talking about rehabilitation tools. We see these types of things starting to emerge in VR, with, like, total body replacements, like I was talking about with anorexia patients: putting them in a whole new avatar and creating a full-body rubber hand experiment. So they're getting sensations on their body at the same time that they're seeing their avatar getting sensations, and it's been shown to help them remodel this top-level belief, an inaccurate belief that they have about their bodies being overweight even though they're not. So that's very interesting, but I think there's really a lot we can do with massively increasing prediction error in the brain. And you see this in a lot of crazy experiments, right? Like the upside-down glasses experiment, where people within a short time were able to totally adapt and live in a world where everything is upside down, or, like, training people on bikes that steer in the opposite direction.
So sometimes, by making things harder, we're making things easier for our brain, or we're forcing it out of this local minimum, this potential well that it might fall into. It's like, okay, now there's too much prediction error, it's time to actually change. Because, as we've been talking about, if you're in a stable state where you've minimized prediction error, that's not always a healthy thing for brains. And side note, this is also a model of depression: basically not having enough prediction error in your brain, falling into a too-stable state, a low-entropy state of your brain. So increasing prediction error in many ways can be very important for our physical health and our mental health. And it's something that I personally try to hack on myself all the time. I'll take dark showers so I can't see anything, or I'll try to use my left hand to brush my teeth, just very basic daily activities that you get very used to doing, to put another constraint on yourself and make it like a fun game. It's also a great way to train in any type of physical task. Okay, so yeah, we talked a lot about the visual dominating the proprioceptive, but why is that? So there's a paper from 2009, and I'm not gonna, I don't know how to pronounce the authors, but their theory was that it's about tool use: in order for us to be able to use tools, we need to be able to not trust the proprioceptive sensation as much as the visual perception, because we don't have any proprioceptive data from any tool that we're using. So that's a very interesting theory. But from my paper, I think there might be something deeper than that going on. It might be interesting to figure out a way to test this on animals that don't really use tools, though this might be very different, because for some animals, you know, smell, their smell world, is much more dominant.
Or their hearing. But I think that for this goal-oriented movement, where we need to coordinate many body parts, you know, you're driving a car, or even just getting out of bed and getting dressed, or walking, all these things require lots of body parts working together. So in order for that to happen, we really need to reduce the proprioceptive sensation, so we can get this high-level goal to dictate to the lower-level hierarchies of the brain and actually pull towards active inference. If we had very strong perceptions of where we are right now, we wouldn't be able to move into the next state, to where we want to be. Our brain is taking this high-level goal and breaking it into movement predictions, and for those movement predictions to be able to move your body to where we want it to be using active inference, we need to weaken where you are currently, have a weaker now, in order to get to the future of this goal-oriented movement. That's sort of how I see it, at least. A sort of interesting thing that I'd like to talk about, and we can also open this to discussion and questions at the end, is how this connects to flow state, and improv, and dance, and play, and all these things that I am totally infatuated with and want to spend most of my time doing. So flow state, for people who haven't heard this term before, is a term that's been coined for when athletes, or really anybody that's good at a specific task, is at their peak. It's been sort of defined as being freed from self-consciousness. Now, a way to look at it through the lens of these experiments and these hierarchical levels and predictive coding and active inference is that it's basically inhibiting these higher-level abstract goals and letting us get hyper-accuracy within this proprioception modality. So really what we call being in the now, right? Getting incredibly fast feedback loops that aren't about the end goal.
If you talk to any of these athletes, it's not about winning, it's not about getting to the end, it's just this pure joy of being in the moment and letting your body be, right? And a lot of this really might have to do with inhibiting this higher level of our neural hierarchy, so we can get this hyper-accuracy of proprioception and create these fast feedback loops between our body and the environment to reach this higher level of performance. And this takes us to improv dance and play, which are really non-goal-oriented, right? That's sort of the definition, and how I see it. You know, there's a big difference between games and play. In games there's always a winning, you're going towards a thing, or you're losing. But in play there's none of that. There's just exploration. The same really goes for improv dance compared to, I don't know, ballet or choreographed dance: you're just exploring the possibility space of what you can do. And that, at least for me, and I think not only for me, this is quite widespread among anybody that is into these things, is the most joyous. Kids find them the most joyous, and it's a very, very healthy thing for our brain to not be in a goal-oriented movement pattern, to just let go and explore without high-level active inference happening. Active inference is still happening, but it's being driven more by the bottom-up signals and the moment-to-moment possibilities, without this, okay, let's do a pretty move, let's do a pirouette, let's do something.
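One toy way to picture this contrast between goal-driven movement and free play, in active inference terms, is a one-dimensional sketch where the precision put on a high-level goal prior is the only difference between the two modes. Everything here, the goal position, learning rate, and noise level, is invented for illustration and is not anyone's actual model:

```python
import random

def simulate(goal_precision, steps=200, lr=0.1, noise=0.05, seed=0):
    # A 'hand' position x is nudged each step to reduce the
    # precision-weighted prediction error to a high-level goal,
    # plus a small bottom-up exploratory noise term.
    rng = random.Random(seed)
    x, goal = 0.0, 1.0
    path = [x]
    for _ in range(steps):
        x += lr * goal_precision * (goal - x) + rng.gauss(0.0, noise)
        path.append(x)
    return path

goal_driven = simulate(goal_precision=1.0)  # strong top-down goal: homes in
playful = simulate(goal_precision=0.0)      # goal inhibited: pure exploration
print(abs(goal_driven[-1] - 1.0), abs(playful[-1] - 1.0))
```

With the goal precision turned up, the trajectory converges on the goal and stays there; with it inhibited, the same system just wanders its possibility space, which is the "play" mode described above.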
Yeah, so I don't think I actually want to show this video right now; it's more focused on VR. But if you want to look at it, it's a cool video we made called This Is Your Brain on VR. I'm predicting the sound and stuff won't work here, but it sums up a bit of what we've been talking about: how different modalities get activated in your brain, how VR reprograms your predictions, and how we can play with these things. Okay, so yep, this one, this was it. That's awesome. Wow, I wrote down a ton of stuff, so shall I ask some questions and then we'll head into the other part? Many places to jump in. I really caught onto this weaker now, a weaker current moment in order to get further, and the way you connected that to accessible experiences like play and martial arts, really awesome. So when you talked about active inference kicking in, what is happening there, and what did you mean by active inference as far as the way an individual person might relate to it? In what domain? Give me more. You talked about predictive coding and how active inference would kick in, or go into a different mode, depending on whether a task is being selected at a high level or a low level. Yeah, exactly. I mean, there's always prediction error in our brain, depending on what's happening. Now, what are we going to minimize it with? There are, again, these three different tactics that your brain can employ. One, reducing sensations. Two, updating your prediction. Three, doing an action. So when you're just in free play, there's still a need to create a prediction of your movement pattern for it to actually happen. But it's happening at a much lower level of your neural hierarchy. It's not happening because there's a high-level part that's saying, oh, I want to look pretty, I want to get my teacher to like me, so I'll do something.
It's happening at the level of sensation, at the level of what feels good from the proprioception, the haptics, the pure sense in which your body knows what's joyous to it: these lower-level predictions and biases your body has about what is joyous and healthy for it. The knowledge is there. There are these lower-level biases, and we're allowing our system to collapse into a state that's being led by them. And I think this can go both ways. Your body knows, and has biases around, what's healthy and fun and feels good. But that's also where trauma is stored. There's an amazing book, I think it's called The Body Remembers: both emotional and physical trauma really are stored in our low-level biases, in our predictions around our movements and our body. So that too can be a big issue for many, many people that need to deal with this. And then there's being able to move out of a state where your brain's triggered, right? That's what it's called in mental health, when you've been triggered and these negative biases appear, predicting, oh, pain is going to happen, even though there's no objective reason for that except your history; something has triggered that prediction. So being able to switch into that place in your brain that knows what feels good for your body and what's healthy for your body is super important. And we don't really know how to do that in a very reliable way yet. There are mindfulness practices, breathing and stretching and massage. But my biggest fear is that with technology, we're actually being pulled away from our body. And that's generally what I'm trying to do: close that gap. Wow. Does that make sense? Thanks for this answer. And it's actually related to this next question. You talked about how when the avatar was getting almost false, perfect feedback, it didn't improve the actual performance.
So it really highlighted how the technology can't just mechanically demonstrate, like a scaffold. It has to give somehow accurate feedback, yet it's often fantastical or abstract scenarios. So I wondered, what were your thoughts for technology design? How could technology be used to actually enable these kinds of states you're getting towards, without constraining us in that way? Yeah, so I think this was a very big learning for me, right? You need to give a person accurate feedback of where they are and what they are actually doing. And again, there might be an ability to superimpose two such things. You might be able to still be in a first-person perspective with somebody doing the movements, but also feel vibrations: use haptic feedback to tell you if you are in the right place, so you'll get a vibration. That way I'm strengthening your ability to know where you actually are. So there are ways to hack our other sense modalities, with music, really a bit of gamification when you're on the path of trying to learn: activate the dopamine, which, at least within the predictive coding framework, is really a precision signal for prediction error. So give some bling, bling, bling, with a little bit of randomness, when you are doing the things you're trying to learn correctly. But you need to get that feedback of when you're doing things correctly, and I think that's what was missing from, oops sorry, from this experiment. And the faster and tighter these feedback loops are, the more your brain will perceive them and actually use them. So there are some really great meditation tools where you'll meditate and afterwards you'll see your heart rate. And there are some more professional biofeedback tools where you can see what's happening, either with EEG or heart rate, in real time.
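To make the real-time biofeedback-loop idea concrete, here's a toy sketch of the kind of sensory mapping such a tool might use, translating a hidden signal (heart rate) into the dominant visual modality. The BPM range and the colour choice are arbitrary illustration, not how any actual device works:

```python
def heart_rate_to_hue(bpm, calm=50.0, aroused=120.0):
    # Map beats per minute onto a colour hue in degrees:
    # calm (low BPM) -> blue (240), aroused (high BPM) -> red (0).
    # The range and colours are invented for illustration only.
    bpm = max(calm, min(aroused, bpm))      # clamp to the chosen range
    frac = (bpm - calm) / (aroused - calm)  # 0 = calm, 1 = aroused
    return 240.0 * (1.0 - frac)

print(heart_rate_to_hue(50))   # 240.0 (blue)
print(heart_rate_to_hue(85))   # 120.0 (green, midway)
print(heart_rate_to_hue(130))  # 0.0 (red, clamped)
```

The point is only the shape of the loop: a low-insight interoceptive signal is continuously re-expressed in a modality the brain weighs heavily, tightening the feedback loop.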
And that's also, actually, what I talked to you about before we started: I'm starting my first hardware project to do that and give real-time biofeedback of heart rate with LEDs. It's called the Wisdom Truffle; you'll see a trippy truffle, and basically the idea is helping people reach some of these tighter feedback loops at a lower level by hacking their visual modality. Let's use it. We figured out, okay, the visual modality is super strong. Can we connect it to other sensations that we don't have as much insight into? Could I connect the visual modality to heart rate? Yeah, pretty easy. Could I connect the visual modality to proprioception? Yeah, if you go to Mew, when you dance and move, your avatar changes color and the sparkles change color. So by hacking the visual modality, which really is like our brain's graphics computer, it's super strong, it's dominant, can we use it in different ways to connect to sensations that aren't as strong and dominant in us? That's one of the things. Awesome. And that point you made about the high-level priors, like, both my eyes are looking at the same thing, and you can play with that by pushing on one lightly and it distorts the image. It's one of those examples, like the seeming high acuity in the periphery, or color vision in the periphery, or the blind spot, that help bridge between, for example, input-output, information-processing understandings of cognitive science and this really extended and predictive way that you're talking about things. So it's really awesome, and thanks for making it really accessible with this presentation. So, do you want to go to the second part? And if anyone has questions, ask live; otherwise I'll just keep writing cool things down. Thank you. So okay, I'm going to... context reset. It's like a context switch.
There's a content switch, but we're still staying within the predictive coding framework and active inference. But yeah, I really like this idea of giving visual metaphors for all the things I've talked about. I think that's a great way to learn and have an intuitive understanding, as with math. I don't know if you follow the 3Blue1Brown videos, but they're just phenomenal for mathematical understanding, because it's all visually coded math. So I'm going to give a visual metaphor. You can either look at the images, or even close your eyes, and just imagine that when your brain is creating reality, it's very similar to playing in sand with some buckets and making sandcastles. The sand is this constant incoming bottom-up signal. There's just so much of it. Sand is constantly flooding into your room, into your space, and you have these buckets, and you're trying to take all the sand and fit it into the buckets, constantly, to create this reality. And this is pretty much what your brain is doing, at least in the predictive coding framework, with the buckets being these top-down priors and the sand being the bottom-up incoming signals. So the buckets are constantly trying to predict and put the incoming signals into a specific bucket, which then turns off the sand, basically. Now let's talk a bit about the classical psychedelics. The term is starting to expand to cover basically everything, but classical psychedelics are whatever acts as an agonist at the 5-HT2A receptor. Agonist just means activating. So what really happens when we take a classical psychedelic is that your brain's buckets get broken down into smaller and differently shaped buckets. Your priors are getting broken down; they're getting diffused into weaker but more accurate, or more detailed, buckets that let your brain create an actually different reality for itself. Now, where is this coming from?
When we look at the actual microcircuit of the brain, the cortical microcircuit, you have six layers in your cortex, and there's the question of how the neurons are actually laid out there. There's a very amazing paper by Bastos et al. where we can see exactly the feedback loops that correspond to predictive coding. Where is the data going upwards? We see that in the forward connections in purple: it's coming in at layer four, going up to layer two, and then looping to layer five. And where are the backward connections, coming from the higher-level brain areas, that correspond to our predictions? We see that in green: coming from layer five, going down to layer four, doing these feedback loops. So when we look at where the 5-HT2A receptors are, we see that they're situated very specifically on these layer-five neurons, on the backward connections, basically. So when they're hyper-activated, they're making these neurons much more sensitive. That's why the theory is that this causes predictions to fire at a much lower threshold. If beforehand, imagining a whole neuronal population, you'd need X amount of neurons resonating together to create a prediction, now you need much fewer of them. Just a few neurons are throwing out their own prediction, and a few other neurons are throwing out theirs, and you're getting many more competing predictions. So, a very simple metaphor: if you're walking in a forest in a usual state, you might have a probability of 0.4 that you're going to see some animals, and a 0.6 prediction that you're going to see some plants. But after you've taken psychedelics and these predictions have decomposed, you might get a much more detailed prediction possibility space: maybe you're going to see a dog, a cat. Hey, maybe you're going to see an elf. That's a low probability, but it's now there.
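This decomposition of priors can be pictured with a little arithmetic: the same belief mass spread over finer, weaker buckets has higher entropy. Only the 0.4 / 0.6 split comes from the example above; all the finer hypotheses and their numbers below are invented for illustration:

```python
import math

def entropy_bits(dist):
    # Shannon entropy of a probability distribution, in bits.
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Ordinary coarse priors while walking in the forest.
coarse = {"animal": 0.4, "plant": 0.6}

# The same belief mass decomposed into finer, weaker buckets,
# now including a formerly sub-threshold option (the elf).
fine = {"dog": 0.15, "cat": 0.10, "bird": 0.10, "deer": 0.04,
        "fern": 0.30, "oak": 0.20, "moss": 0.10, "elf": 0.01}

print(round(entropy_bits(coarse), 2))  # 0.97
print(round(entropy_bits(fine), 2))    # higher: more hypotheses, each weaker
```

The higher entropy of the decomposed distribution is one way to read "weaker but more detailed buckets": no single prediction dominates, so no single prediction can quiet as much of the incoming sand.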
So things that were below the probability threshold are now above it, and part of your predictive probability space. Now, what really matters here is how this interacts with the rest of your bottom-up sensations. If you are getting a very clean bottom-up signal, which almost never happens, but let's imagine a very clear auditory signal for a moment, or a very clear visual perception, suddenly you can have much more accurate predictions. And people on psychedelics, including myself, will report this quite often: oh wow, I can see footprints in the forest the way I've never seen them before, or hearing music where the audio is just so crisp and so detailed, in ways I don't perceive when I'm not on psychedelics. But if you are in a bottom-up environment where you're getting noisy signals, it's dark, things are constantly changing, there are various sounds, then suddenly the predictions you settle on might not be as correlated to quote-unquote objective reality. Your predictions can become much less accurate, and a type of overfitting can happen. That's, I think, a good word to use here. You know: in this forest, it's dark and gloomy, there's something moving, oh, it's an elf, right? Because that was part of the probability space of your predictions, and with the noisy bottom-up incoming sensation, that was enough to reduce prediction error. But what's important to note, and you can see it in this very simple image, is that the decomposed predictions are all less strong. So they're not going to turn off as much prediction error.
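That dark-forest, it's-an-elf moment can be sketched as plain Bayes with invented numbers: flatten the priors and feed in an ambiguous bottom-up signal, and a hypothesis that was negligible under ordinary priors can end up winning the posterior. This is only a cartoon of the idea, not a model from the research:

```python
def posterior(priors, likelihoods):
    # Bayes rule: normalize prior x likelihood over the hypotheses.
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    z = sum(unnorm.values())
    return {h: v / z for h, v in unnorm.items()}

# Ordinary sharp priors vs. 'decomposed' flattened priors
# (all numbers invented for illustration).
sharp = {"deer": 0.499, "swaying_branch": 0.500, "elf": 0.001}
flat = {"deer": 0.35, "swaying_branch": 0.35, "elf": 0.30}

# A dark, noisy forest: the blurry moving shape fits every hypothesis
# almost equally well, so the bottom-up signal barely discriminates.
noisy = {"deer": 0.30, "swaying_branch": 0.30, "elf": 0.40}

print(posterior(sharp, noisy)["elf"])  # stays negligible under sharp priors
post = posterior(flat, noisy)
print(max(post, key=post.get))         # under flattened priors: 'elf'
```

With sharp priors the ambiguous input changes almost nothing; with flattened priors the same input is enough to tip the posterior toward the improbable hypothesis, which is the overfitting-to-noise story above.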
So while you're on psychedelics, your brain is constantly going to be in a state that has higher prediction error. And as we saw before, when you have higher prediction error, your brain has a few different tactics it can use to deal with it. One, modulating the senses. Two, updating predictions. Three, active inference, right? So this is why psychedelics offer a very, very unique opportunity for updating your model: there's a much higher level of prediction error, and under some situations your brain might decide to actually update the model. And for people suffering from trauma and depression, we're seeing absolutely amazing results in research around using psychedelics to update people's biases around these things. And again, it depends very much on the setting. Don't expect to take psychedelics in a place where you don't feel safe and don't have a support structure and still get these benefits. In order to update your model, you need to be getting bottom-up sensory inputs that are very strong and accurate about what's happening to you right now. But if you do that, and you are able to feel safe in your body like we talked about before, to be in play situations or improv or flow, that might allow your brain to really, really heal deep traumas, to reprogram deep biases from childhood, when your brain had that extra neural plasticity. So now let's get to active inference. The interesting part about these 5-HT2A receptors is that we don't have so many of them in our motor cortex. There are a lot fewer of them. So we could actually say that active inference modulates psychedelic experiences and makes them weaker. And we see this in the instructions people are given when they're doing psychedelic therapy right now, at MAPS for instance, or at other universities and hospitals that are researching this. They really tell people: lie down, don't move.
And when I talk to therapists about this, they really say: we know when something's bubbling up with a person, we see it in their body. When it's becoming too much, they start really moving, and we try to encourage them to stay still. Because when we start moving, we're actually weakening the prediction error, right? Our brain is creating a prediction that's still pretty stable, because the motor cortex doesn't have as many 5-HT2A receptors to be hyper-activated by the psychedelics. So our motor predictions are able to modulate the psychedelic experience. And the benefit of this is that, oh wow, if you are doing psychedelics and you're feeling really overwhelmed, do a bunch of jumping jacks, push-ups, move around the space. That's going to really help lower the prediction error in your brain and bring you back to, I guess, the reality you're used to. So yeah, this was a short connection between psychedelics and active inference that I just wanted to bring into the space, too. Thank you. Yeah, there was so much, even in that, that I could... Yeah, you can unshare, and then I will just turn to my screen. Yep, that's perfect, I'll just re-crop it. I guess I'll crop it while I ask this first question, which is: you pointed out that a lot of even the clinical work on psychedelics is done in a way where they actually prevent movement. And it's also like education: stay at the desk. They prevent movement, or at least don't encourage it. Yeah, yeah, okay. Totally. Not restraint, but it's on a continuum with authority as a restraint. So, yeah, what is maybe the role of the body in remote work? Now that we're also, again, not physically restrained to our chairs yet, but it feels like that some days. It's a disaster. Yeah, I think we're... And I'm a total motor chauvinist when it comes to our brain. I don't know if you've heard that term before. No, we didn't, explain that. Yeah.
Basically, we have a brain and cognitive functions because we move. When you look across the range from trees, to algae, to creatures in the crazy ocean, to animals that move, you start getting brains when things move. And there's a great example of a really insane sea creature that moves around and has a brain, then finds a place to plop itself down and digests its own brain. Don't do that. Yeah, it doesn't need it anymore. Our brain is needed to create predictions with the environment, right? This is the whole predictive coding thing, and Andy Clark has a beautiful book and papers about the feedback loops we create with the environment. And if our environment is highly predictable because we're staying in the same place, then guess what, we don't need this. And as humans, and Andy Clark also goes into this quite a lot, because we want to minimize prediction error, that's part of this whole free energy thing in our brain, we've done it a bit too well. We've created these large-scale active inferences with our environment: shaping our supermarkets to look the same, the roads, our houses. And I think that's really a big cause of the mental health crisis that we're generally seeing, especially when it then comes to not even moving your body, staying in the same physiological state, which is just too low entropy. And I'm going to go to the extent of saying that my main goal in my work is to improve that one metric: the number of states your body is in. If I can do that for modern citizens, who have a total lack of this, I am absolutely certain that we're going to see improvement on many metrics that we're lacking.
So when it comes to this, and I try as much as I can to not fall into it myself, I've been working remotely since even before this crazy COVID thing happened: do calls while you're walking outside, or stretching outside, anything that doesn't require you to be in front of a 2D screen. Bring toys; we do have the ability to gamify and bring toys into our environment. I have rings outside, and punching bags, anything that will create constraints around your movements. Move your desk, change your table, don't work from the same space if you possibly can. Just keep bringing that prediction error into your body. And I think ergonomics is looked at in a very incorrect way. Ergonomics is all about, let's find the perfect position for you to be in for many hours. That doesn't exist. We're not supposed to be in one perfect position, just sitting like this. Well, it's like a modern mental ergonomics: the ergo, the work, the dynamics of a dynamical system that's us. So ergonomics now is the back angle for the chair, but then there's actually a ratio of the chair and the exercise ball or whatever, that's not my specialty, but it's all about changing those ratios. Because there is no such thing as the perfect sitting state, or not-moving state, for your body. Lying down too much gives us pressure problems; people in hospitals need to be turned over to not get these things. We are a dynamic physiological system, and I think there is a very big gap, a lot of the time, for the people studying these systems, the math people and computer science people, who are just really not spending enough time connecting to their own bodies. So I highly encourage everyone to find the time to figure out what you are as a full human being: blood and blood pressure, bones, tendons. Be in that space, and let those predictions take precedence for at least a little bit of your day.
One thing you mentioned is this lockdown, almost a standardization and synchronization and stabilization of the lower-level states, and then the two affordances we have are our physical movement and our communication. And we've seen how those are subject to top-down as well as bottom-up constraints, but that is kind of what's being remapped: the constraints on our movement, variously, and on our communication networks, variously. And so to have a multi-scale framework like active inference, where we can talk about the remote team having input and output, and then the body, we can say, okay, here's an algorithm that's awesome for this sector, for this team, but I want to not be reinforcement-learning-driven, or reward-driven. I want a framework that allows me to express these kinds of playful states, or even gives a rationale to construct these states and design for them. Yeah. So, that was a lot, but a lot of good points you brought up. Maybe first the communication thing, right? We're doing this in 2D, sometimes only voice, sometimes 2D screens, and none of this is naturalistic for our brains, right? We're 3D creatures. We pick up social cues from being able to gaze into each other's eyes, to predict where you're looking, and it gets worse the more people you bring in. Group Zooms for me are torture. There's so much awkward silence, so much inefficiency: who's next, what's the level of understanding? Even when we're in a normal social group, I usually put little numbers on people for how much I think they understood me. Okay, this person understood 70%, 30%, zero. And I do the same for myself, and I try to be very honest with people. I'm like, I understood around 30% of what you just said. That's how we are. We have different internal data structures, and managing to minimize the differences between these data structures is a very difficult task.
Even when you're not working remotely, and remote work has definitely made it a lot more difficult. I've tried to use things like VR and found it sometimes okay, but, and I'm a super VR-enthusiastic person, still, spending more than 15 minutes in there at a time is just too much. It's heavy, it's clunky, it's a closed point of view. So that's a difficult thing, but what we do try to do a lot of the time is create mockups directly in VR and share the mockups in VR, so we understand what we're trying to build together from this 3D perspective. Generally, we have an AR app where I'm trying to create interactive holograms, more for social connection: instead of just getting you on a 2D screen, or Facebook and Instagram, getting something that moves in my face and actually interacts with me. Like, you can blow kisses. I don't know if you've ever seen the Roger Rabbit movie, but I realized, I think I got the idea when I was a kid, and then I built this thing, and when I watched the movie again I was like, oh wow, I just built the Roger Rabbit thing. Does it blow kisses through the screen or something? You blow kisses and they go into your 3D space, an AR space, so you can actually move around your space and catch the person's kisses. So I'm really excited about the possibility of creating remotely what's called joint action in cognitive neuroscience. And joint action is a physiological thing more than anything. If I even shake your hand, or just pass you that pen, I start predicting you, and starting to predict you starts building empathy with you. So these empathy things really are naturalistically based on very low-level physiological data. And without this, because we're currently constantly on Zoom, and I click here to send you a chat, or I raise my hand by clicking a thing and you see a thing over there, it's much, much weaker, and not really creating these joint actions.
So I think there's a lot to explore with technology around joint actions and team-building activities in VR, even for a short period of time: just simple things like, let's build blocks together, things you would do in physical space if you could be together, to understand a bit more, on this low-level predictive level, who a person is, how they move. Now, VR is very strange because you can mess things up. I don't know if you've ever been in VRChat, but you can interact with someone as, oh, I'm now a three-meter giant with fire wings, and I'm a tiny pixie, and you're just interacting that way. So I think that's really interesting, and it can actually be a very interesting way of de-biasing things. Like, what happens if we conduct interviews that way? We know there's a huge bias in the whole hiring system, and in building teams to begin with, that is not solved. Sometimes HR doesn't even look at the names, and there are all these things to try to hide who the person is, but the moment there's an interview, you know who the person is, and the racial biases come in, the gender biases come in. So there are a lot of really interesting ways that we could be de-biasing ourselves, again through this excess prediction error and the ability to really detach ourselves from the normal perception that we have.
That's really interesting about the physical components of remote work, whether it's a physical object that's going to be moving, I don't know, like a little jellyfish that does a movement, or just synchronizing. And it made me wonder if there could be an exercise, like, could people just say, okay, at the beginning of this video meeting, let's all switch context: let's do three claps, take a deep breath, think of a purple cow, and coordinate that way, and then prepare, here's our narrative, here's what we're here to do. Because that kind of stuff, the mission, comes back to your earlier idea about giving the information in the way that's naturalistic, in its natural unit: the goal, the end. And so having the group assemble with that synchronization, and also a remembrance of the end, is a really interesting implication, I think, of active inference, but not of some of these other sub-component theories per se. Yeah, and that's also what I'm trying to do with this hardware: let people synchronize their heartbeat. You're over there, I'm over here, let me send you my heartbeat as beautiful glowing pulses, and you can send me your heartbeat, and we get this biophysical data that, honestly, even if we were in the same room together I wouldn't necessarily get unless I was touching you, right? And this is the exciting part of immersive and embodied technology: we are getting to the stage where we can start moving beyond 2D video into biophysical data that lets us get a deeper understanding, a deeper connection, a deeper synchronization with each other. Well, I'm happy to talk for a little bit longer if you want, but I guess my closing thought, or at least direction, because maybe we'll see or hear from you again, would be: what are the next steps for applied active inference, or for an individual who's curious about these kinds of topics?
Is there a website, or a research direction, where active inference is coming into play in the next months to years? Yeah, so honestly, before I heard about your podcast, I'd never heard of any such focused community within the predictive coding framework. I would be very excited about connecting, and this is something I've been trying to do: bring movers into this space. It seems there are these two different worlds. There are movers that hate technology because they feel it's taking them away from their bodies, and there's technology and science, and unfortunately there's a very small overlap between them. That's where I was very lucky: I found a co-founder who's in both these worlds. But if there was a way to bridge this gap, like the movement exercises you're describing, it's a thing we do constantly in improv classes, building physiological synchronization, and the things you'll see there sometimes are just like: this is one organism. People moving together, not pre-decided. And there's a beautiful research paper, I think by Noy, from Israel; I can link to it afterwards. What they did there is they tried to figure out what's going on in the brains of improvisers, and they have a beautiful mathematical paper about the double feedback loop that's created. They tested two conditions: leader versus follower, which was more what I was testing, versus no leader and follower, where you're doing things together. They built a very cool paradigm of just moving your hand up and down in one dimension, very easy to measure, but it was there, the data was there. They found a much higher degree of synchronization when there was no leader and follower, between improvisers, either musical improvisers or dancers, who had trained their brains to tune in to the other person.
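The leader-versus-follower versus no-leader comparison can be illustrated with a toy synchrony measure. This is not the analysis from that paper, just lagged correlation on two made-up hand-movement traces: a follower who copies the leader a beat late shows peak correlation at a nonzero lag, while jointly improvised motion peaks at zero lag:

```python
import math

def lagged_corr(a, b, lag):
    # Pearson correlation of a[i] with b[i + lag].
    if lag >= 0:
        a2, b2 = a[:len(a) - lag], b[lag:]
    else:
        a2, b2 = a[-lag:], b[:len(b) + lag]
    n = len(a2)
    ma, mb = sum(a2) / n, sum(b2) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a2, b2))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a2))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b2))
    return cov / (sa * sb)

t = [i * 0.05 for i in range(400)]                # 20 s at 20 Hz (invented)
leader = [math.sin(x) for x in t]
follower = [math.sin(x - 0.5) for x in t]         # copies the leader 0.5 s late
joint = [math.sin(x) for x in t]                  # no leader: zero-lag synchrony

best_lag = max(range(-20, 21), key=lambda L: lagged_corr(leader, follower, L))
print(best_lag)                                   # 10 samples = 0.5 s delay
print(round(lagged_corr(leader, joint, 0), 3))    # 1.0 at zero lag
```

Zero-lag, high-correlation motion is roughly what "this is one organism" looks like in data: neither trace trails the other.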
Because when you're doing this, and when you're talking about teams and synchronizing and play, I think we really need a shift in the hierarchical structure of teams. This is something I'm really exploring as the CEO of a startup: how do I get the ideas I want out there, but also bring in people's ideas so they feel they're part of it, and how do we then get a cohesive product at the end? None of these are easy questions, but it's about having this ability for a double feedback loop. So it's not just the top-down manager saying, oh, this and this and this, you guys understand, this is my vision. Don't expect people to be excited about your vision, right? That's not how these things work. If you want them to be excited and attuned, you, as the manager in this hierarchical structure, need to be attuned to their visions, what they're seeing, their perspective, why they joined this project, and try to get that coordinated into a global vision that doesn't break apart because there are too many voices. So none of this is easy, but I think a lot of the time, yes, going to the low-level physiology first and building this low-level physical trust and understanding with each other is really unique. Like, my co-founder and I, we dance together. We climbed trees together before he moved to Hawaii. Now we try to hang out in VR and do things together, but it's really not the same. So figuring out how to do these joint physical actions together, I think, will really get us into the future somehow, or a future I want to be part of, yeah. Nice, or a likely future, or one that I expect to be finding myself in.
But on that tech-and-movers point, and the transdisciplinary nature and the potentially productive conversation: we think about these guest streams as two-directional. Maybe, for those with a physics background or a philosophy background, you've inspired them to think about how they're doing their physical activities, or to go deeper into a knowledge or movement tradition they were already a part of. And then there are people for whom that's their daily life, and maybe this is the first time they're hearing about active inference. So we can always do event series, or discussions, or working groups, or projects like that, to help make that two-lane freeway accessible for everybody and respect both directions of the interchange. And it's a really important part, like you said, moving away from the hierarchy of disciplines. Is physics the top-dog discipline? Physics is awesome, but, I barely survived a BS in physics, it was dramatic. It's great, and then different people get drawn into different parts, and that will be sort of like the colony phenotype. And when we constrain what the lower level looks like, we increasingly infringe on what the higher level can be as well. So it's about that balance, and that's a common theme with these active inference discussions. Well, yeah, this is awesome. I really appreciate this conversation, and I'm sure it's something a lot of us will listen to and think about. So we hope to maybe interact with you or other communities in the future, but this was really fun. So thanks a lot, Sarah. Thanks so much. Peace. Bye.