I'm interested in the computational principles that underlie the brain's ability to perform efficient and effective inference and prediction under the challenging conditions that characterize the daily life of animals and other organisms. So, to jump straight into the lectures: the title I gave for this series is Bayesian Models of Perception. The general idea I want to introduce is that perceptual phenomena, in particular those related to sensory perception, can be conceived of and fruitfully modeled as statistical inference processes. Let me write down the core idea: perception as statistical inference. That is the central theme I want to talk about. The main things I want to go through in the next few hours are these. First, the why: we will spend a little time on plausible reasons why, if we are interested in the study of sensory perception, it could be a good idea to model and conceptualize it as a process of statistical inference. Then the how: what quantitative methods we will use, building up a framework for tackling general problems in perception using statistical inference.
And then we will see a few examples of how this approach can be fruitfully employed to gain some insight into how the mind works, or how perception in animals works. OK, so let's start with a little terminology. I have been talking about perception; what do I mean by that? By perception, I mean a cognitive process: the cognitive act of coming to some conclusions about the state of the world based on sensory evidence. Let me write it down, because it's important: perception is coming to conclusions about the state of the world based on sensory evidence. Importantly, I want to distinguish this idea of perception from the concept of sensation. By sensation, I mean a lower-level set of phenomena: the processes involving the activation and response of the sensory organs that follow, more or less mechanistically, from exposure to external or internal stimuli. So, sensation is the activation and response of sensory organs following external or internal stimuli. The idea is that sensation is a lower-level description, and perception is the higher-level phenomenon we are interested in. To give a concrete example: as I move my gaze around the room, the sensation process involved is the activation of my retinas following the varying patterns of light impinging on them. The perceptual phenomenon associated with that sensory process is that, from this sensory input, I can make sense of the fact that I am standing in this room, that there are objects in front of me, that I can see colors, that those things are chairs, that I can see two tables and a number of people, and that I can recognize some faces while others are new to me.
So the perceptual phenomenon is the act of putting the sensory information together to extract some knowledge about what is out there in the world. And if you think about it, this is of course a very important problem for animals and other life forms. Yes, a question? The question is what I mean by the distinction between external and internal stimuli. It is not particularly important here; by internal I mean things like proprioception. I mostly wrote it down to cover all bases. Does that answer the question? The follow-up is whether we have internal and external perception depending on where the stimulus originates. Yes: if by internal you mean stimuli that originate inside your body, then of course animals have internal perception related to monitoring the state of the body, both at the conscious and the unconscious level. For instance, you have a sense of the position of your own body, which is extracted from a very complex set of signals coming from the activation of various muscles, the joint angles, and so on. That is a classic example. OK, so I was saying: it makes sense to conceptualize this process as an inference problem. Your brain is, for the most part, locked inside your skull. It does not have direct access to the things that are out there in the world. The only things it has access to are a bunch of electrochemical signals coming in through the senses, themselves originating from a set of sensory transduction processes.
So the brain only has this indirect representation of the world, and its job is to control the actions of the organism in such a way that the organism will be able to survive and reproduce. From an evolutionary perspective, then, it is reasonable to suppose that there is strong pressure on the brain to be able to extract useful representations, useful information about the things that are out there in the world, information that can be put to use for further action and for survival. Does that make sense? Right. We have a question from the chat: sensation seems like a singular aspect of taking in information, whereas perception should perhaps be considered to have various levels or layers. Is that true? I think you can decompose either of these phenomena at various levels of abstraction. These are useful concepts, but even the boundary between them becomes a little blurred if you look at them closely. And yes, it is definitely true that perceptual processes in particular can be thought of as structured along hierarchies. When you look at something, you may, for instance, see the texture of some visual object. Looking at this table, I see the visual texture of the surface; then I see that this texture is part of a surface, and that this surface is one face of a big block of wood.
And then that is part of an object, and so on: things are made of parts, which naturally have a hierarchy to them. But there is also another kind of hierarchy, where what you perceive is not just the object in itself but also what it represents in terms of the possible uses the object can have, or its possible relationships to you and to the other people in the room. So you can definitely think of these things as organized along multiple layers. OK, great. So, I was saying that from an evolutionary perspective it makes sense to think of perception as an inference problem on which the brain is under really strong pressure to perform well. But what does it mean to perform well on this kind of inference task? If you think about it, one key aspect of being able to extract useful information about the world from noisy and sparse sensory data is the ability to use existing information. You need to be able to reuse things you already know about the world; you cannot wake up every morning in a brand-new world that you have no idea how to navigate. Let me bring up a couple of examples to show you what I mean, hopefully in decreasing order of silliness. Consider first this image here. Imagine you are walking down the street, you look up, and the scene on the screen is what you see in front of you. I don't know if you can see it, but over there in the distance there is a human figure.
It is very blurry, fuzzy, and small. As you look at this scene, you may look at that person and wonder whether it is somebody you know or a total stranger. Now consider the following scenarios. In one scenario, that figure could be compatible, in terms of shape, size, and height, with one particular friend of yours. And suppose that in this scenario you had agreed with that friend to meet at that location and at that time, so you are actually waiting for a person who looks roughly like that. As you look down the street and see that shape, vaguely compatible with the shape of your friend, you may think: oh, there he is, or there she is. You may directly perceive that little figure as the figure of your friend. On the contrary, if you had just gotten off the phone with that friend, who told you they were out of town for the holidays and for sure not there, then when you look down the sidewalk you are very unlikely to perceive that figure as your friend, at least in the absence of higher-quality sensory evidence. Of course, if you walk up and it actually is your friend, you may want to ask why they told you they were away on holiday when they are right here. But the point is that what you expect to perceive will influence what you actually perceive. That was a fairly trivial example, so let me show you another one. This illustration is quite famous. As a quick poll for the audience: if you have to choose, what animal is this, a duck or a rabbit?
OK, great, we have a bunch of answers for each. Anybody who thinks this is neither a duck nor a rabbit, some other type of animal, a crocodile? OK, good. So this is what typically happens: it is a drawing explicitly made to be ambiguous, so there are two ways in which you can look at it, and it can look like either a duck or a rabbit depending on how you look at it. When you ask people what animal this is and force them to pick one, they will typically split: some will say duck and some will say rabbit. To give another example of how your expectations influence your perception, people have performed exactly this experiment, going around showing this picture and asking which animal it looks like more, around Easter time. In certain countries, Easter is associated with rabbits, so people expect to see rabbits everywhere, in shops and so on. If you run the poll in those countries at that time of year, a large proportion of people will report that this is actually a rabbit, because they expect to see rabbits around them in daily life during that period. Whereas if you repeat the experiment around, say, October or November, the proportion of people saying rabbit goes down by a lot, because Easter is several months gone and people are no longer expecting to see rabbits around them. So that is another simple example of how, at a very intuitive level, expectations can influence perception. Now let me give you a third and final example, of a slightly more precise nature.
Consider this figure. This is called the Adelson checkerboard. You see these two tiles here: this one is tile A and this one is tile B. My question for you is: what are the colors of the two tiles? What is the color of tile A? Great. And the color of tile B? Right, that is the typical answer you get: people say gray and white, or black and white, or dark gray and light gray. And that is entirely correct; those are the two colors we see when we look at the figure. The interesting thing, the thing that really highlights how perception depends on our expectations and on context, and that also highlights the distinction between sensation and perception, is that the raw sensory content of the color of the two tiles is identical. By that I mean that the RGB values of the pixels I am showing you for tile A and tile B are exactly the same. If you don't believe me, I can add a bar to the figure: this bar here is a solid bar of uniform gray. You can see that at the top it exactly matches the color of tile A, and as you move your eyes down, it exactly matches the color of tile B. This should convince you that the RGB values are the same. Are you all convinced? Yes, it's hard, I know; it is hard for me too, but it is the case. I could give you the figure and you could measure it in Photoshop: exactly the same values. But when I take the bar away, it is impossible to see those two tiles as being the same color. At least I cannot do it; I don't know if you can. So why is that?
This is a case where the sensation process is exactly the same for the two tiles: the same color at the level of the pixels. But the perception is very different. Why? It is exactly the same phenomenon as with the rabbit, or the friend down the street. In technical terms, your brain is doing what is called discounting the illuminant. Your brain knows about the physics of the world: it knows about optics, about illumination and shadows. Because there is this green cylinder here, and there appears to be a light source over there on the right, it can infer that tile B must be in shadow, whereas tile A must be directly illuminated by whatever light source is there. Yes, question? The question is: if we hide the green object, what would we see? There is actually a version of this figure without the green object, and I believe most people will still perceive the shaded area as the shadow of some object outside the scene, because people see the checkerboard and expect its colors to alternate. The alternative would be a checkerboard whose tile colors also change continuously across the board, which is a very weird object, not something you expect to see. So again, it is the same idea: you see what you expect to see. So that is the point: your eyes register the exact same color for the two tiles, but because you assume the illumination is different, the underlying properties of the two surfaces must be different, and that is why you perceive the two colors as different. I think this is a nice example of how perception works. Yes?
Yes, that is a big question. The question is whether this is something innate or something developed by observing the world. You can imagine that this type of thing happens at many different levels. This is a relatively low-level phenomenon, related to color constancy: the fact that you perceive the colors of surfaces more or less independently of the spectral content of the illuminating light. But there are many levels to this, and the answer can change depending on which particular phenomenon you are looking at. (And maybe I should stand a little closer to the microphone, since I keep moving around.) So, basically, you will find that some of this is innate. It is a difficult question: some of it will be due to the adaptation of the brain to the statistics of the world during the development of the organism, that is, to the type of environment it is exposed to during development; some of it may be hardwired on an evolutionary time scale; and some of it may operate on much shorter time scales, because, as we may see in some examples later, you adapt to the type of stimuli you expect to see in a specific context. For instance, if I put you in a room and show you a bunch of checkerboards with smoothly varying colors, maybe after a while you will stop perceiving those tiles as different colors. So yes, it happens at all levels. Another question: so you have the same sensation but two different perceptions?
By the same sensation, I mean something that you do not have subjective access to. The sensation here would be the pattern of activation of your retinas when you look at one specific set of pixels in tile A or in tile B; you do not really know what that is. A related comment from the audience: when we sense something, early processing already transforms the signal, so only part of the original stimulus reaches the brain. Yes, exactly. For this particular image you can argue that there is, for instance, some contrast equalization going on already in low-level vision; you can make that type of argument. But the conceptual point I am making is this: if I make these tiles big enough, or if I put the image close enough to your face, you are free to fixate on a part of the screen where, at the level of individual cells in your retina, you are not getting any meaningful information from the neighborhood. Then another question, and sorry, I keep forgetting to repeat the questions. The question is: if we have the same sensation and the same brain but two different perceptions, there must be some other element, beyond sensation, brain, and perception, that explains why this happens. And yes, there is another element: the difference between the two tiles is that you know that one tile is in the shadow of the green cylinder. That is what differentiates the two cases. So do we have different sensations? No: by the definition of sensation that I gave here, the sensations are the same, and that is exactly why I introduced this distinction.
According to the definitions I gave, you have two different perceptions, you see two different colors, but there is no low-level difference in the sensory content of the stimuli at the two tiles that justifies this. What justifies the different perceptions is the context: the fact that one tile is in a particular position with respect to the cylinder and the hypothesized light source. Is that right? Thanks. OK, let me keep going. So far I have mostly been waving my hands, trying to convince you that perception is not a passive processing or passive representation of sensory stimuli, but very much an active process that involves incorporating a lot of existing knowledge about how the world works. Having said this, to make some progress we are going to need a quantitative framework. (Let me stay near the microphone so it picks me up.) So, what we want is a quantitative framework that allows us to do a few things. We need to be able to deal with uncertainty in the data, because sensory data is noisy, corrupted by all sorts of things, and scarce. We want to be able, as I argued, to include previous information, knowledge, or beliefs in the inference process. And importantly, because your life and your reproductive success depend on it, you want to do this optimally, in some sense.
By optimally I mean, say, from a decision-theoretic standpoint: you can take this information and extract as much of it as possible in order to take good actions based on it; the precise sense does not matter much here. So: we need to do this, in some sense, optimally. For these reasons, the framework we are going to use for reasoning about these ideas is that of Bayesian inference, which indeed satisfies the requirements I have written here. To show in more detail what this means and how we will go about it, let me make a little example. We will start from a very small, very simple example of Bayesian inference and work up from there. So, consider the following case. Imagine that you enter a room, you look at the floor, and you see that the floor is shiny, maybe like this one, which is slightly shiny. In terms of extracting information from this sensory data, you want to understand whether the floor is wet or dry. The fact that it is shiny is intuitively related to it being wet or dry, but it is not the same thing. So let me write down the two variables: shiny versus wet. To perform inference in this case, we are going to follow two steps. In the first step, we define and build a forward, or generative, model for how we think the data comes to be: for how the hidden or latent states of the world could give rise to the data we can observe. This is essentially a story of how the data comes to be, broken down into elementary probabilistic steps. In this case we have a very simple problem: we only have a single observation, the floor being shiny or not shiny.
And we have a very simple latent state of the world: the floor being wet or dry. So it is going to be an extremely simple story, and it goes like this. We start by encoding our prior belief, our prior knowledge about floors in general, by saying that floors are typically dry. In probabilistic terms, we say that, before observing the floor itself, the probability that the floor is wet is, say, 0.1, and the probability that the floor is dry is 0.9. That is before making any observation. Then, since this is a story about how the latent state of the world, wet or dry, can give rise to any observation we may make, shiny or not shiny, we again express that in probabilistic form. When the floor is wet, it is fairly likely, regardless of the material, that the floor will end up being shiny. We write this as a conditional probability: the probability that the floor is shiny given that it is wet is, say, 0.8, and the probability that it is not shiny given that it is wet is, of course, 0.2, that is, 1 minus 0.8. On the other hand, when the floor is dry we have different probabilities; for instance, it is less likely that the floor will be shiny. So we say that the probability that the floor is shiny given that it is dry is, let's choose, 0.4, and the probability that it is not shiny given that it is dry is 0.6. So this is a very simple generative story for how any possible state of the world, wet or dry, could give rise to any possible observation, shiny or not shiny. This is the first step of building up a Bayesian inference process.
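The prior and the two conditional probability tables above can be encoded in a few lines of Python; this is only a minimal sketch, and the variable names (`prior`, `likelihood`) and the dictionary layout are my own choice, not anything from the lecture.

```python
# Prior belief over the latent state of the world: floors are typically dry.
prior = {"wet": 0.1, "dry": 0.9}

# Likelihood: probability of each observation given each latent state.
likelihood = {
    "wet": {"shiny": 0.8, "not_shiny": 0.2},
    "dry": {"shiny": 0.4, "not_shiny": 0.6},
}

# Sanity checks: the prior and each conditional distribution sum to 1.
assert abs(sum(prior.values()) - 1.0) < 1e-9
for state in likelihood:
    assert abs(sum(likelihood[state].values()) - 1.0) < 1e-9
```

Writing the model as explicit tables like this makes the "story" concrete: each row of `likelihood` is one of the conditional distributions on the board.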
We can also represent this little forward model in graphical form. (Let me check the microphone; great, now I can shout less.) Diagrammatically, it looks like this: there is the state of the world, which then gives rise to the observation. In our specific case, the state of the world is wet or dry, and it gives rise to shiny or not shiny. For those of you who are familiar with them, this is just a probabilistic graphical model, the simplest possible graphical model of the probability distribution implied by the tables over there; but it is not particularly important that you recognize that. It is simply a graphical representation of our little story. Is everything clear? I am making up a little story, encoding my ideas about how the world works; this exercise corresponds to what I was saying earlier, that the inference process starts from some prior information about the world. One important remark: in this case, our story has a bit of a causal flavor. There is a sense in which the floor being wet physically causes it to be more shiny, depending on the material, so you can think of this arrow as carrying some causal meaning. But this is not necessary: the forward model can be an entirely descriptive model that only refers to, say, correlations, with no claim to actual causality. That is just a disclaimer.
The second step of our modeling procedure, in general, is actually performing the inference. In the first step, we described the model that goes from the state of the world to the observations. In the inference step, we start with one particular observation, we look at the floor and see that it is shiny, and then we invert the forward model to arrive at some information about the latent state of the world. This is where having a probabilistic description of our problem becomes very useful, because we can reuse all the tools of probability theory, and in particular the tool we care most about here, as you can guess: Bayes' rule, which is what we will use to invert the model. So: start from a specific observation and invert the forward model with Bayes' rule. In our case, the specific observation is that the floor is shiny. Let me start with a reminder of Bayes' rule. If A and B are random variables, the probability that A takes the value a given that B takes the value b is P(A = a | B = b) = P(B = b | A = a) P(A = a) / P(B = b), where the denominator can itself be expanded as a sum over all values a': P(B = b) = Σ_a' P(B = b | A = a') P(A = a'). This is just a little reminder; the rule can be derived in a number of interesting ways.
Some of these derivations are more philosophically satisfying than others. For our purposes today, in the interest of time, you can regard Bayes' rule, if you are not convinced, as a straightforward consequence of the usual definition of conditional probability, and we will leave it at that. Now, in terms of terminology: P(A = a), which I will abbreviate as P(a), is called the prior probability for A. P(b | a), seen as a function of a, is the likelihood function. P(a | b) is called the posterior. And P(b) is simply a normalization factor. So we can rewrite the whole rule as: posterior equals prior times likelihood over normalization. That is our refresher. And now, because we have a probabilistic model of our problem, we can simply apply this tool to compute the probability of the floor being, say, wet, given that it was shiny. Questions up to here? OK, I think we're good. So we can just plug our numbers into the expression for the posterior.
This is going to be the probability of the floor being shiny given that it is wet, times the probability that it is wet, divided by the sum: the probability of it being shiny given wet times the probability of wet, plus the probability of it being shiny given dry times the probability of dry. Taking the values from the table over there:

P(wet | shiny) = 0.8 × 0.1 / (0.8 × 0.1 + 0.4 × 0.9) ≈ 0.182.

And by complementarity, the probability of the floor being dry given that it is shiny is one minus that, about 0.818. Great — so now we have a simple perceptual scenario where we have a sensation, described in a very vague and fuzzy way as "the floor is shiny," and we have computed a posterior probability over whether the floor is wet or dry. But the goal of our exercise is not to compute a posterior probability distribution. The goal is to predict, or model, what the subject of this little thought experiment — "subject" being the more technical term — would perceive in this situation, or what they would do if they had to act on that sensory information. Typically, when you look at the floor, you perceive it either as wet or as dry; you don't perceive a probability distribution. So you would want to pick one of the two values, and similarly, if you have to act on that information, you generally have to commit to one of the alternatives, even though that is not strictly necessary.
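As a quick check, the same posterior computation can be written out in a few lines; the numbers are the ones from the table, and everything else is just Bayes' rule:

```python
# Posterior over the floor being wet, given that it looks shiny,
# using the numbers from the lecture's table.
p_wet = 0.1                  # prior P(wet); P(dry) = 1 - p_wet = 0.9
p_shiny_given_wet = 0.8      # likelihoods
p_shiny_given_dry = 0.4

# Normalization: P(shiny) = sum over states of P(shiny | state) P(state)
p_shiny = p_shiny_given_wet * p_wet + p_shiny_given_dry * (1 - p_wet)

p_wet_given_shiny = p_shiny_given_wet * p_wet / p_shiny   # ≈ 0.182
p_dry_given_shiny = 1 - p_wet_given_shiny                 # ≈ 0.818
```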
What we are missing is what is called a readout of the posterior probability distribution: a mapping from the posterior to what we call a report on the state of the world. What I mean is that we still need a way of going from the posterior distribution to something concrete: imagine this being an experiment where you ask the subject, "is the floor wet or dry?" — the subject will report the floor being wet or dry, and we need a rule for going from the posterior to that choice. The simplest possible readout, which applies very well in this case, is the maximum a posteriori (MAP) readout: when you have a number of discrete options, just pick the one with the highest posterior probability, which is very intuitive. Under this readout, the report would be that the floor is dry. So even though we perceive the floor as shiny, the percept corresponding to the wet-or-dry state is still that the floor is dry — mostly because the inference is dominated by our strong prior belief that floors are typically dry. Of course, you can see how this answer could change depending on exactly how you set those numbers. This is the simplest possible type of readout, but it can also be made more sophisticated, for instance if the subject is in a situation where they need to make a more complicated choice based on the sensory information.
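The MAP readout just described is a one-liner in code (posterior numbers from the shiny-floor example):

```python
# MAP readout: with a discrete set of hypotheses, report the one with
# the highest posterior probability.
posterior = {"wet": 0.182, "dry": 0.818}    # from the shiny-floor example
report = max(posterior, key=posterior.get)  # picks "dry"
```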
To make this concrete, imagine that instead of an interview-like setting where I just ask whether the floor is wet or dry, the floor belongs to a hallway you are entering while rushing to a meeting. You are late, you enter the hallway, and you see that the floor is shiny. Depending on whether you judge it to be wet or dry, you need to decide whether to keep rushing — maybe run across the hallway to be there on time — or to slow down, because the floor may be wet and you risk slipping. This is an example where the action based on the sensory information has a strongly asymmetric utility landscape. Suppose you commit to the idea that the floor is dry — say, by the reasoning that its posterior probability is higher — and you run across. If you are right, your reward is not being late for your meeting. If you are wrong, your punishment is that you may slip, fall, and break some bones, which is a fairly steep penalty. Conversely, if you think the floor is wet and you walk across cautiously: if you are right, your reward is that you don't have an accident, which is nice, and your punishment, if you were wrong, is just being a bit later to your meeting. Depending on exactly how important the meeting was, this is a strongly asymmetric situation.
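This asymmetric-utility readout can be sketched concretely. The payoff numbers below are purely hypothetical, chosen only so that slipping is much worse than being a bit late; the posterior is the one from the shiny-floor computation:

```python
# Expected-utility readout (Bayesian decision theory) for the hallway example.
# Posterior from the shiny-floor example; utilities are made-up numbers.
p_wet, p_dry = 0.182, 0.818

utility = {                                # action -> {true state -> payoff}
    "run":  {"wet": -100.0, "dry": 1.0},   # slipping and falling is very costly
    "walk": {"wet":    0.0, "dry": -1.0},  # walking just makes you a bit late
}

def expected_utility(action):
    u = utility[action]
    return u["wet"] * p_wet + u["dry"] * p_dry

best_action = max(utility, key=expected_utility)
# Even though "dry" has the higher posterior probability, the steep penalty
# for slipping makes the cautious action the one with higher expected utility.
```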
Q: May I try to rephrase this in a more formal setting? This readout — what you have in mind is that we are constructing an estimator of the random variable from the posterior, and that this estimator could be biased or not, depending on the rule.

A: The readout is better thought of as a decision rule. You are not necessarily estimating something. What I was just saying is that you can incorporate a concept of utility into the framework and say: my readout is going to be the thing that maximizes my reward — or minimizes my cost — in expectation, according to my posterior probabilities. This is linked to Bayesian decision theory. So you are not necessarily estimating the value of the latent variable; you may be doing something a bit more complicated.

Q: The reason I am asking is that once you do that, in a sense you are getting out of the Bayesian framework — this is something additional. You might say, "if I miss this meeting, I'm dead," and then the observations don't matter any longer: you no longer care about the information you get from them, because you will have to rush across no matter what. Which, in a sense, is not the process of gathering information — I just want to make the distinction between these two steps.

A: Yes. If I understand what you are saying, it goes beyond the Bayesian framework in the sense that it is not just about the inference anymore; there is more than the inference.
But I think that is actually a crucial plus of the Bayesian framework: it allows you to plug in a concept of utility of the decisions very naturally, because once you have a posterior over the possible options, you can put in whatever utility function you want and optimize for that. That is one of its selling points. Thanks for the question. Other questions?

Q: In some sense we are still Bayesian, in that we are taking the expectation of the utility with respect to the posterior distribution and maximizing that, and the action is based on it.

A: Yes, exactly — but as was just said, to do that you need a notion of utility, and that comes from somewhere else; it is not just about the inference. Other questions? OK, let me check the chat: "Why are we using probabilities for representing sensation?" Actually, we are not — or rather, it depends how you look at it. In the second step, the inference, we started from the fact that the sensation was, in our case, "shiny." That was a fact; there was no probability attached to it. It was just data. But when you look at it from the point of view of the first step — building your forward story, the story of how the data can arise — you need to put probabilities in, because you don't know what is going to happen. You are assuming the world is stochastic: the floor can turn out shiny or not shiny in either state, and that is a random process.
I hope this helps the person who asked in the chat. OK, let me check the time — we have about half an hour. So, this was a very simple and perhaps slightly hand-wavy example, in the sense that "wet," "dry," "shiny," and "not shiny" are very high-level concepts. Let's see if we can do something slightly more quantitative. We will apply the same procedure, divided into the same steps — building a forward model of the data, and then assuming the subject inverts that forward model in order to arrive at a percept — to a case where we are trying to estimate a continuous world-state variable. Let me make it concrete with a running example: a sound-localization task. Imagine that you are in the lab, and you have a subject, whom with my amazing drawing skills I am going to represent with this head — nose and ears. The subject sits in front of a curved panel, and behind the panel there is an array of loudspeakers, which the subject cannot see. At some point, one of the loudspeakers emits a sound. That loudspeaker is located at a certain angle, which we are going to call s, for stimulus — this angle here is our stimulus. The task of the subject is to point at the location where the sound came from. So there is a stimulus — a continuous location, which could be anywhere — there is a sound, and the subject needs to point at where it is.
We are going to approach this problem with the same method as before, but make it a bit more quantitative. Remember, in the previous example we started from the sensation of shininess; here we want to quantify it better, and identify what the measurement would be in this case — the equivalent of "shiny," the raw data the subject has access to. For this and the rest of the lectures, we are going to assume that this measurement is some sort of abstract neural representation of the sensory stimulus. Let me write that down, and then say something about it: abstract neural representation of the sensory stimulus. The idea is that for pretty much any sensory stimulus — imagine repeating the same beep from the same loudspeaker over and over again — the pattern of neural activity elicited in the subject's brain by the stimulus will change from repeat to repeat. There are multiple reasons why the same stimulus can have variable neural representations. One level of justification is that biology is noisy: neural activity is stochastic, and there is noise in sensory transduction, in synaptic transmission, and in the operation of the ion channels that make neurons fire. Then there is a higher level: there is ongoing activity in the subject's brain, so even if you repeat the same stimulus over and over, the state of the brain will differ from moment to moment, with fluctuations in brain state and in top-down processes — attention, arousal, whatever the subject is thinking about, what they are going to have for dinner.
All of this will, of course, affect the representation of the stimulus. Third, a given sensory stimulus can be instantiated in a myriad of ways, even from a purely sensory perspective. For instance, in this task I could ask the subject to report the location but change the pitch of the sound from trial to trial. Sounds coming from the same location — the same stimulus, if we take the stimulus to be just the location — would nonetheless be very different sounds, and so would have different representations. Similarly, in the example from the beginning about recognizing the identity of a friend: the stimulus was the identity of the person, but you can see the same person in a myriad of possible ways — far or near, up or down, dressed one way or another. Those are all different ways in which the same stimulus can present itself. All of this is to say that, for the sake of argument, allow me to posit that there is a distribution of possible neural representations associated with one particular value of the stimulus. Does that make sense? OK. As a further simplifying assumption, we are going to call this measurement x, and take it to be a very simple random variable: one-dimensional, normally distributed around some value with a certain standard deviation. Yes?

Q: You just said that the measurement is one of many neural representations that correspond to a single stimulus —
Q (continued): But the converse is also true: different sensory stimuli could be represented by a single abstract neural representation — many-to-one as well. What you are aiming at now is to identify this x with an angle, so that eventually it does not matter whether the sound was produced with a high or low pitch. So in this process there is also compression, which is already oriented to some goal of the measurement you are making, or to the attention you pay to certain aspects. The sensory stimulus could be everything — very high-dimensional — and then your measurement could be ambiguous with respect to the stimulus while at the same time performing a strong compression to a very low-dimensional signal. And that compression, in some sense, already brings in the reason why you are looking at it.

A: Sure — stimuli at different locations could indeed end up with the same neural representation. And thanks, that is a very good question; I think there are two levels to it, and perhaps to the answer. One is that if you think of this x concretely, as really being the representation of this angle in the brain activity of a particular bunch of neurons, then yes, you may be thinking of, say, a brain region specialized for encoding that quantity, with circuitry that throws away irrelevant information — so there is going to be some compression there.
But there is also another level, which is that this framework can already be very useful even if you take this x to be something very abstract — and when I say abstract, I really mean it, since I am even going to use a one-dimensional representation, which is obviously very far from anything you could actually measure in the brain. So I am making a fairly strong statement: I am not saying that the brain itself is necessarily performing this compression; I am saying that x can be an effective representation of what is going on, not directly a physical one. At one level, you could think of x as: take the activity of a population of neurons, project onto the first principal component in this condition, and get a one-dimensional representation — and maybe the variability is Gaussian because the central limit theorem is acting on lots of sources of variability, so the approximation makes sense. That is one level at which this can work. But at another level, this exercise works as a very abstract depiction of what is going on, not necessarily tightly linked to the activity of any specific neural population. I hope that helps. Other questions?

Q: Do we know how compression is made at the brain level? And — I don't know if I understood the earlier question correctly — does the knowledge that we have to search for an auditory stimulus bias our perception, because we are focused on the auditory aspect of the stimulus?

A: I am going to answer the first question, and for the second I am going to ask you to repeat it, because I am not sure I got it.
So the first question was: how does compression happen? To answer that, you have to look at specific circuits — specific areas of the brain that are specialized for encoding particular aspects of the world that are useful. You can make the argument that, over evolutionary or developmental time scales, at least certain circuits in the brain have adapted to the statistics of the environment in a way that allows them to encode their typical stimuli efficiently. We know from information theory that in order to compress something effectively, you need to know something about the statistics of what you want to compress, and there is evidence of this happening, especially in the sensory periphery. But this is rather tangential to our topic. Now, can you repeat the second question?

Q: Yes — if I am asked to attend to an auditory stimulus, does this influence my perception? The stimulus has more dimensions than just the auditory one; does reducing it to one dimension influence the perception I get of a higher-dimensional object?

A: Yes, for sure. Attentional processes are important — what you are trying to do during a task matters a great deal. You can see it at the level of performance: subjects perform differently depending on whether they are instructed to pay attention to one thing or to something else. But you can even see it at the level of low-level neural representations: for instance, the presence of attention can alter the statistical structure of neural population codes.
I can leave you references on that. But this is a bit beside the point: for our purposes, this x is just one big Gaussian — a very simple representation. So we are going to say that the probability of the measurement given some stimulus is a Gaussian:

p(x | s) = 1/√(2π σ²) · exp( −(x − s)² / (2σ²) ).

Note that because of our simplifying assumptions — a one-dimensional stimulus and a measurement represented as a one-dimensional variable — we can afford to put x and s in the same space; we can pretend they live in the same space, which is not always the case, but it makes things easy for us. The other ingredient we need is a probability distribution over the stimuli themselves: the way the task is set up, the stimuli are sampled randomly from trial to trial, and — again for convenience; this is something we can decide as experimenters — this distribution is also a Gaussian:

p(s) = 1/√(2π σ_s²) · exp( −(s − μ)² / (2σ_s²) ),

where μ is the center of the distribution of stimuli. Great — we now have the ingredients we need for the next part. Having set up the problem, let's see how our Bayesian modeling framework applies in this case. We will have to expand our drawing a little, because there is now a multiplication of modeling levels going on. Let me reproduce the experiment here: this is the screen — let me not draw the little loudspeakers, but you know they are there — and this is the subject.
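As an aside, the two distributions just written down fully specify the forward model, and can be sampled in a couple of lines. This is a minimal sketch; the values for μ, σ_s, and σ are illustrative, not from the lecture:

```python
import random

# Forward model of one trial of the sound-localization task:
#   stimulus:     s ~ N(mu, sigma_s^2)   (chosen by the experimenter)
#   measurement:  x | s ~ N(s, sigma^2)  (noisy internal representation)
mu, sigma_s = 0.0, 10.0   # prior over speaker angles, in degrees (assumed)
sigma = 5.0               # sensory noise (assumed)

def sample_trial(rng=random):
    s = rng.gauss(mu, sigma_s)
    x = rng.gauss(s, sigma)
    return s, x
```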
In the experiment, we have a certain stimulus s, which we represent as a node in a little probabilistic graph. On every trial, if you look at the subject from the outside, as we experimenters do, this particular stimulus will correspond to some report: the subject will point at some location on the screen, which we call ŝ. That is the report. Now, this is us — the experimenter. The way we are going to model the experiment with our Bayesian scheme is as follows; let me put s here and ŝ here. We say that a specific stimulus, which we know and control, gives rise to some measurement x — the quantity we defined over there; this is something we assume. Then we say that the subject, based on this measurement, produces a report, and the way this arrow is implemented — the way the subject goes from the measurement, which is the data for the subject, to the report, which is the output of the subject's inference — is by doing Bayesian inference. Now, to fully model this, we need to add something. So far I have just sketched what I had already written down: there is an assumption about how a measurement arises, and I have added this arrow representing what we want to explain.
The point is that we need to explain exactly how this arrow is computed, and to do that we need one further level of modeling: the subject itself has an internal model of the task — I apologize for the recursiveness of this drawing. The subject has some notion of the task, and some notion of how a hypothesized stimulus, call it s_hyp, gives rise to the measurement. The subject has an internal model of how the measurement arises, and uses that internal model to compute this arrow — to go from the actual value of the measurement to the report. This formalizes a bit better what we were also discussing before: in our model of the whole setup, we assume that the subject performs Bayesian inference, but to perform Bayesian inference, the subject needs to have a forward model of the task themselves. So there are two levels of modeling. And just to clarify: the things drawn out here in white are real — things we can actually observe in the experiment. The stimulus is accessible to the experimenter, but not to the subject: the subject does not know what the true stimulus is. The report is also real: it is something the subject produces and the experimenter records. Then we have the measurement, which the subject has access to — at least in our theory — but the experimenter does not: the experimenter can model its existence, but it is something happening inside the head of the subject, which is then used to compute the report. Any questions about this? Yes?
Q: I want to discuss the role of the measurement in perception. Is this a model of human perception, or of perception in general — for example, do we have a measurement in animals?

A: This is a model of perception in general; you can make exactly the same argument for animals — there is nothing special about humans here.

Q: Then my question is: is perception possible without measurement? In this model, if we assume there is no measurement — no abstract neural representation — is perception still possible? Does the measurement play an essential role in this diagram?

A: For this model to work, you need the measurement, because that is what the inference is based on. The idea is that you are looking at perception as an inference procedure, and the measurement is your data — the starting point of the inference. You need data to perform inference.

Q: In human perception this measurement makes sense to me, but if I imagine a rat or a mouse, I cannot quite see the role of the measurement. Maybe it is a simplification we humans make of perception — a parameter of the model rather than a real quantity.

A: When I introduced the measurement, I was talking about it as an abstraction of the set of neural activities elicited by some stimulus.
A (continued): I play a sound, and some neurons in your auditory cortex fire — boom, boom, boom. The measurement here is an abstraction of that. So I don't see how an animal is different in that respect: animals also have brains, and their neurons respond to sensory stimuli.

Q: But isn't this activation itself a mechanism of inference?

A: No — what I am saying is that there is a level at which there is a low-level sensory representation of the stimulus, and what you perceive is the output of a computation that takes this representation as input. The percept, or the decision, is something that gets computed later.

Q: Back to my first question, then: is there a boundary between inside and outside? Does the measurement happen outside or inside?

A: Physiologically, it is hard to say — again, this is very much an abstraction. Pinpointing "this is the measurement, this is not" is not the goal of this framework. The framework sits at the cognitive and behavioral level. We are not trying to link it to specific neurons — saying the activity of this neuron represents a measurement and the activity of that neuron represents a percept, if that is what you are thinking of. It just says that it is useful to think of the perceptual process as divided into two stages. Does that help?

Q: I have a question about this cloud inside of a cloud. I am not a neuroscientist, so I apologize if the question is naive. Tell me if I get it right — what looks special to me about neuroscience is this. Say I do genomics: I study gene expression, that kind of thing.
Q (continued): I get noisy measurements and want to do Bayesian inference. In that case, I have my prior, which is my idea of the generative process. Here, as an experimenter — the neuroscientist — I have a generative process, which is how I make the signals, but I am modeling an inference process: I am modeling someone who has their own idea of what the generative process is, no?

A: Yes, you have it exactly right — and let me repeat it, because it is a very interesting and important point. Here we are not just doing Bayesian inference or Bayesian data analysis on something; we are modeling a system for which we think the ideas of Bayesian inference may be useful for the functioning of the system itself. We are modeling a system that performs an inference process. And in fact, the model we build of this inference process will have free parameters. For instance, the sigma I wrote here is the sigma of the neural noise — you don't know what that is, so in the end it will be a free parameter of your model, and what are you going to do? You are going to have to infer it. At that point, if you want, you can do Bayesian inference on your model of Bayesian inference. That is an entirely separate exercise, but you can do it.

Q: Thank you. Also, from the distribution p(x | s) — doesn't that mean the measurement can be non-zero even when the stimulus is s = 0?

A: Yes.
So in this case, actually, the measurement can take negative values; just from the way I drew it, both the stimulus and the measurement can take negative values, for instance. So this is another way in which this x is a very abstract representation of neural activity: it can be zero, positive, or negative. And because of noise, you can have that s is zero and x is positive or negative. Yeah, so, okay. So before closing, give me five more minutes. I'm just going to write down the solution of the inference problem for this case, so that on Wednesday we have something to start from. So, remember, essentially we are saying that our subject will perform Bayesian inference based on some internal model. We're going to assume two things; let me write them down because they're important. First, the subject has an accurate representation of the statistics of the environment. This essentially says that the probability of the hypothesized stimulus in the head of the subject is equal to the true prior that we have built into the experiment. This is an assumption we make, which can be relaxed, but in the simplest case the subject actually has a good idea of what the prior over stimuli is. Second, the internal model the subject has for how their own measurement arises matches our own idea of how this measurement arises, which means that p(x | s_hyp) is equal to p(x | s). With these two assumptions, I'm saying that this little model here is the same as this model here.
So, to keep things simple (you can always complicate things as much as you want), we're saying that the experimenter and the subject agree on the underlying process that generates the measurement. And on top of that, we also assume that this model is well specified, in the sense that it is actually what's going on. Okay, and now, armed with this, the subject can perform Bayesian inference following the steps we laid out before. They can build a generative model for the measurement, which is essentially just this: there is a prior over stimuli, and then a conditional probability for the measurement given any possible value of the stimulus. And then there is an inference step: the probability of s given x is p(s|x) = p(x|s) p(s) / ∫ p(x|s') p(s') ds'. This is just applying Bayes' rule; here we have a continuous case, but it doesn't matter. Now, interestingly, for the scenario we have, thanks to all the simplifications we have made, the numerator p(x|s) p(s) is just the product of these two Gaussians. You can multiply them out and collect the terms.
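This inference step can be checked numerically before doing any algebra. A minimal sketch, with made-up parameter values (Gaussian prior N(μ, σ_s²) over the stimulus, Gaussian noise x ~ N(s, σ²)), applying Bayes' rule on a grid of hypothesized stimulus values:

```python
import numpy as np

# Illustrative numbers, not from the lecture.
mu, sigma_s = 0.0, 2.0   # prior over the stimulus
sigma = 1.0              # measurement noise
x_obs = 1.5              # a single observed measurement

s_grid = np.linspace(-10, 10, 4001)  # hypothesized stimulus values
ds = s_grid[1] - s_grid[0]

prior = np.exp(-(s_grid - mu)**2 / (2 * sigma_s**2))
likelihood = np.exp(-(x_obs - s_grid)**2 / (2 * sigma**2))

# Bayes' rule: posterior ∝ likelihood × prior, normalized by the integral over s'.
unnorm = likelihood * prior
posterior = unnorm / (unnorm.sum() * ds)

post_mean = np.sum(s_grid * posterior) * ds
print(round(post_mean, 3))
```

Note that the unnormalized prior and likelihood are enough: the normalization constants of both Gaussians cancel in Bayes' rule, which is why they are omitted above.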
You can write this product as e^z · exp(−(s − μ_p)² / (2σ_p²)), where μ_p = (x/σ² + μ/σ_s²) / (1/σ² + 1/σ_s²), 1/σ_p² = 1/σ² + 1/σ_s², and z does not depend on s, okay? So you can rearrange the product and express it like this, I promise; you can try it yourself. The advantage of this (and I also promise that I'm almost done) is that, if I can delete my masterwork here, when you plug this expression into the posterior you get p(s|x) = e^z exp(−(s − μ_p)²/(2σ_p²)) / ∫ e^z exp(−(s' − μ_p)²/(2σ_p²)) ds'. But because z does not depend on s, the factor e^z cancels, and therefore this posterior distribution over s has the form of an unnormalized Gaussian in s over some normalization factor. And because this has to be a probability distribution, that factor is of course just the normalization factor of the Gaussian, which you can read off directly. So the posterior becomes p(s|x) = 1/√(2πσ_p²) · exp(−(s − μ_p)²/(2σ_p²)), okay? So what we have is that the posterior in this case is again a Gaussian probability distribution, centered on a location which is essentially a weighted average of the location of the measurement and the location of the center of the prior. And the weights of this weighted average are given by a notion of precision.
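The closed-form result above can be written down directly as a small function; the parameter values in the usage line are arbitrary:

```python
import numpy as np

def posterior_params(x, sigma, mu, sigma_s):
    """Posterior N(mu_p, sigma_p^2) from combining the Gaussian likelihood
    N(x; s, sigma^2) with the Gaussian prior N(s; mu, sigma_s^2)."""
    prec = 1 / sigma**2 + 1 / sigma_s**2            # 1/sigma_p^2: precisions add
    mu_p = (x / sigma**2 + mu / sigma_s**2) / prec  # precision-weighted average
    return mu_p, np.sqrt(1 / prec)

mu_p, sigma_p = posterior_params(x=2.0, sigma=1.0, mu=0.0, sigma_s=1.0)
print(mu_p, sigma_p)  # equal precisions -> posterior mean halfway: 1.0
```

Two things to notice: the posterior precision is the sum of the two precisions, so the posterior is always narrower than both the likelihood and the prior; and the posterior mean always lies between x and μ.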
So basically, the more confident we are in our measurement, i.e. the smaller the variance of the measurement, the more heavily the measurement weighs in this average. And vice versa: the weight of the prior in this weighted average is proportional to the precision of the prior distribution. Basically, if we have a strong belief that all stimuli should come from one particular position, then the effect the measurement can have on that prior is very limited, okay? So there is this sense in which the prior we have over stimuli pulls the posterior away from the location of the measurement. In this case we can see directly, and more quantitatively than before, that our Bayesian inference scheme is allowing us to incorporate our preexisting notions of where the stimuli should be directly into our perception scheme. And so, okay, we're already about ten minutes over the limit, so I think we should probably stop here. I mean, if you have questions, yeah. Thanks. One second, okay. So now we're going to have a coffee break upstairs; that's the area for the school. And I remind you that at six p.m. we're going to have a reception, again upstairs, okay? Jenny promised that he'll be around for the coffee break, so if you want to address questions to him privately, you can do that. Thank you.
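The pull of the prior described above is easy to see by varying the measurement noise. A quick sketch with arbitrary numbers (prior centered at 0, measurement at 3): as σ grows, the posterior mean slides from the measurement toward the prior mean.

```python
def posterior_mean(x, sigma, mu=0.0, sigma_s=1.0):
    # Precision-weighted average of measurement x and prior mean mu.
    return (x / sigma**2 + mu / sigma_s**2) / (1 / sigma**2 + 1 / sigma_s**2)

x = 3.0
for sigma in (0.1, 1.0, 10.0):
    print(sigma, round(posterior_mean(x, sigma), 3))
```

With σ = 0.1 the estimate stays near the measurement (≈ 2.97); with σ = 10 it collapses almost entirely onto the prior mean (≈ 0.03). This shrinkage toward the prior is the quantitative signature of the effect the lecture describes.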