Okay, so we can start the seminar. Today we have Dana Ballard. He's a great professor, specializing in cognitive science, brain science, and modeling. I especially remember his great work on visual attention — I learned a lot from it, maybe five years ago. And he's also famous for very early work, more than ten years ago now, on predictive coding — very famous work. Anyway, today he's going to talk about generative models. Please.

Okay, well, thank you Sam. I feel like I know Kenji and Jo from reading their papers over many years. But it's fun being here — this is my first time here, so it's an adventure for me. Driving on the left side of the road is an adventure too. Today I want to talk about a code for neurons. We've heard that there are about 100 billion neurons in there, computing with spikes. And in systems neuroscience particularly, experimenters take a lot of spikes and average them. I feel like this is a big mistake — the brain can't be doing that. Averaging is okay as a correlate of what's going on, but it's not a generative model; it's not what the brain is doing. What the brain is doing is an open question, and today I have a proposal for that. With all the brain power here, if things come up, maybe you'll have some ideas for me — that would be totally fabulous, because I haven't solved all the problems, but I have some data for you that might be a lot of fun. So let's get going. We're getting interested now in the idea that the brain uses voltage oscillations in its communications and in its computations. Theta is the big one — I had a flight from Dallas to Narita, one stop, and right now I could tell you everything that happened on the plane.
So theta is for things that need their own place in space and time. Another one is beta. Beta is very important, and people are starting to get really excited about it; it works in reachable space. So if I want to reach for something, or make a saccade in vision, I need my beta — it sets up the questions that you're going to ask. And then the star of today is gamma. Gamma is the old workhorse that still does the heavy lifting for the detailed computations that you want to do. Okay. The other thing, of course, is that for vision, everybody knows that primates have this tiny little fovea, about one degree wide — the width of your thumb held at arm's length. So basically our visual day is making saccades, changing our gaze to ask questions about the periphery. We'll need that. Another thing that's happening is that people are thinking that the brain has an agenda: your mental life is that you have questions and goals. You have your question ready to go, and then you ask the motor system and the visual system to answer that question. That's a new idea from the last maybe ten years or so, because when I started out, vision was viewed as: there are objects on the table, and vision makes you a description of what's there, and then you can think about it. Now that's gone; something else is happening. So let's take a look at some behavior. In this experiment we're in a virtual world and we're going to drive the course. The crosshairs are the gaze — where you're looking — and you have two things to do: a red car will appear and you have to follow the red car, but you've also got your speedometer here and you have to keep the speed at 30. So let's see if I can get this going for you... nope, nope... here we go, okay, here we go, sorry.
All right, so let's see, where's the car? There it is, let's wait for it... okay, there it is. Car, car and speed — whoop — car, speed. I like that one. Now my special moment is going to come up soon. Here we go: we're going to change lanes — there. Your visual system knows that if you're changing lanes, you'd better look in the mirror. There's nothing in the mirror, but the brain knows that that's what you should be doing. Anyway, remember this, because we'll have to come back to it. Okay, so let's do some vision. And I apologize, because I think lots of people in the audience have seen this hundreds of times, but it'll become important too. We know that when an image hits the retina, it goes to the LGN in the thalamus, which is like the I/O port — a way station. Then we go into the memory, and the first part of that memory is V1. And of course we have feedback from V1 back to the thalamus. I always like to think about the fact that the bus going from V1 to the thalamus is ten times the size of the bus going the other way. In my mind, that's your question: you ask your question on that bus, and the thalamus gives you the answer. Now, the other thing everybody knows is that the thalamus uses dots — dark centers or light centers — and then you have the code in your V1. Basically your cortex is your memory, and brain computation is so slow that you basically try to compute everything ahead of time and then look it up. If you're doing that, coding the information is really important; that has to be your life's goal. Here we're going to code the dots into edges.
Okay, and of course, to take you back 60 years: you take an extracellular wire electrode and you show a dot. Here we go — I want this to be very loud. So this is Hubel and Wiesel recording a cell in a cat. Why are we listening to this? Because you're listening to spikes. And if you're a neuroscience experimenter, what you're going to do is count those spikes, and that's a number. So the cells are trying to tell you about numbers, and if you want those numbers, you have to take the spikes. So you run trials — sometimes 40, 50, 100 trials — and then you sum them and average, which is okay. But I don't like this method, because it seems like biology is not going to do this. Maybe the experimenters can, but we can ask: if we're estimating a number, how many spikes do we need? It turns out that it's expensive. I think I did this calculation, though maybe Kenji did it before me. What are we trying to do? We have a cell and it's firing; spikes are coming out, and we want the number. So we take the little places where spikes can go and put them in a box — a space-time box, a Bernoulli box. Each box holds either a zero or a one, with some probability of getting a spike, and that makes it really easy to count. Now, what do I want? I want to know how accurate the estimate is — say I want my error down to 2.5%. The other parameter I have to play with is confidence: here is the scalar I want, and here's the one I got by averaging, and I want these to be very close. So I want to know: how many boxes do I need to get the error that I want? And the answer is, it's a lot.
If I want the confidence to be very high, 0.9, and I want to be accurate, then I need a lot of these. You can do this calculation yourself — you've probably done it already — but you need a lot of boxes. The boxes could be cells or they could be synapses; I don't care how you do it. But it's a lot, and remember, this was for one scalar. If I have a 10 by 10 image — I'm sorry, there's no room anymore for anything. So something else has to happen. So let's have a look. Okay, we need to do some anatomy first. Before, we learned it was dots and then edges, and everybody knows now that the anatomy is very literal and faithful here. If I want an edge, I can make one with anatomy: I just take a dot and a dot and a dot and connect them — if I want an edge in the middle, I take the right dots and string them together. That happens, but we're computer guys, so we want to do better than that. We want to know how we should be coding edges, and we have this really important work by Olshausen and Field. Maybe a quick poll — how many people have seen this before? Okay. So here's an image in the thalamus — dots — and I'd like to code it in V1. Here's the image, and I make a big vector out of it. And here's my matrix, and the columns of the matrix are these little images. What this says is that the image can be a linear combination of these little cells. But you can see that, since I wanted an exact match, these are very ugly things: I'm taking a linear combination of all of them, and they don't look like biology at all. Whereas this looks like the biology you would see by measuring in an experiment. So what did Olshausen and Field say? They said: look, we don't want to use all of them — we want a code.
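The counting cost can be sketched numerically. This is not the exact calculation from the talk; it uses the Hoeffding bound as a stand-in, with the 2.5% error and 0.9 confidence mentioned above, and a made-up true rate for the empirical check:

```python
import numpy as np

def boxes_needed(eps, delta):
    """Hoeffding bound: Bernoulli boxes needed so the averaged estimate
    lands within eps of the true rate with probability >= 1 - delta."""
    return int(np.ceil(np.log(2.0 / delta) / (2.0 * eps ** 2)))

n = boxes_needed(eps=0.025, delta=0.1)   # thousands of boxes for ONE scalar

# Empirical check: average n Bernoulli draws and see how often the
# estimate is within 2.5% of the (assumed) true rate p.
rng = np.random.default_rng(0)
p = 0.3
estimates = rng.binomial(n, p, size=2000) / n
coverage = np.mean(np.abs(estimates - p) <= 0.025)
```

For a 10-by-10 image that cost multiplies by 100, which is the "no room anymore for anything" problem in the talk.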
We don't want V1's code to be as big as the thalamus's; we want it to be much shorter. So you model the image, but in your objective function you charge for the cells that you turn on. You would like a code that uses fewer than the 64 degrees of freedom — much fewer. And you can do that. This is my ugly picture, but this picture is important — I hope you'll remember this figure, because we'll need it. Here are the 64 points in the thalamus, and in V1 we only need 12, because we have these ones: one, two, ... twelve. When I'm done with the optimization, I just take the linear combination: this times this, plus this times this — everybody's seen this before. So basically, in the coding, I get a nice code with fewer cells, and I take a linear combination of them. And this works because V1 has many, many more cells than I need. I have a lot of little cells in there, so when a particular image comes in, it's easy to go in and pick just the ones that are most helpful. Okay. Now, the other thing we should talk about — it'll be important too — is predictive coding. I had a very brilliant student, Raj Rao, and we were working on this, and he realized that you could take the mathematics that Olshausen came up with and put it in a very different context. So what's happening here? Before, we had this image and we were coding it directly; we were worrying about this error, trying to code this level. But here what we're saying is: wait a second — at this level, we can have a predictive estimator, and that estimator is going to try to maintain a prior model of the level below.
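As a toy version of the Olshausen–Field idea — pick just the most helpful cells rather than all 64 degrees of freedom — here is a greedy matching-pursuit sketch. The original work uses a sparsity-penalized objective, so this is only an illustration, and the library size and test image are invented:

```python
import numpy as np

def matching_pursuit(D, x, k):
    """Greedily pick up to k dictionary columns (cells) whose linear
    combination approximates the image vector x."""
    residual = x.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(k):
        projections = D.T @ residual
        j = np.argmax(np.abs(projections))   # the most helpful cell
        coeffs[j] += projections[j]
        residual -= projections[j] * D[:, j]
    return coeffs, residual

rng = np.random.default_rng(1)
D = rng.normal(size=(64, 256))               # 64-point input, 256 candidate cells
D /= np.linalg.norm(D, axis=0)               # unit-norm receptive fields
x = D[:, :3] @ np.array([2.0, -1.0, 0.5])    # image built from 3 cells
coeffs, residual = matching_pursuit(D, x, k=12)
```

The point is the shape of the computation: a code with at most 12 active cells out of 256, read back as a linear combination.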
So basically the predictive estimator makes a prediction of what it thinks is coming, and if there's an error, the level one level down only has to send the mismatches. One more thing: this classic paper gets more citations every year, and that's real, because it's such a beautiful idea. So go back in history and look at it. Okay, but we want to know: what is the brain doing? That's why you're here. You'll see some receptive fields, and then you'll be asked to make some judgments about them. In fact, I'm having trouble interpreting my receptive fields, so I'm hoping somebody will come up with some ideas. Okay, so let's go. These are my heroes. Moshe Abeles — I like him a lot; he's great. I was reading his book, and his book says: look, if the input to a cell all arrives at the same time, it's easy to generate a spike. I thought, wow, that's amazing — why aren't all the cells waiting for their input to arrive at the same time? And it turns out, more and more in the experimental world, that's what happens: the input all arrives at the same time. Reading this book changed my life. Now, I have very few principles. One is no more military research money — I haven't taken any since '82. And I'd thought I wasn't going to Israel until they solve the two-state problem. But then Abeles had his 80th birthday, and I thought, what am I going to do, miss his birthday? So I went — he's really a great guy. This work is groundbreaking. And also Wolf Singer, who is a genius, really, in the theoretical world. He found synchronized oscillations across large areas of the maps in the cortex. I feel bad for him, because in the US they didn't give him the proper recognition or awards for this work.
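The predict-and-send-the-mismatch loop can be written in a few lines. This is a minimal sketch in the spirit of the Rao–Ballard scheme, not their actual model: the sizes, the orthonormal weights (used only to keep the toy well conditioned), and the step size are all my choices:

```python
import numpy as np

def predictive_coding_step(x, U, r, lr):
    """One update: the higher level predicts U @ r; the lower level
    returns only the mismatch x - U @ r; the estimate r moves to
    reduce that error."""
    error = x - U @ r          # the only signal sent up the hierarchy
    r = r + lr * (U.T @ error)
    return r, error

rng = np.random.default_rng(2)
U, _ = np.linalg.qr(rng.normal(size=(16, 4)))  # top-down generative weights
x = U @ np.array([1.0, -0.5, 0.25, 2.0])       # input the top level must explain
r = np.zeros(4)                                # estimate starts blank
for _ in range(50):
    r, error = predictive_coding_step(x, U, r, lr=0.5)
```

Once the prediction is right, the error channel goes quiet — that is the whole appeal of the idea.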
It's totally fabulous. And then maybe people know Pascal Fries. He's really nice too, and he's a disciple of Singer. He does something very close to what we're doing: he studies gamma oscillations in networks, but the key thing is that he uses a global phase. If you have two oscillators, you can move the phase, but you have to do it across the entire network. That's the difference. So those are my heroes, and now we get to us. What makes the work you're about to hear go is that the gamma frequencies have a local phase: each spike has its own private phase. So here we go. Okay. What we have to do is get some data from patch clamping. So, quickly, patch clamping — it's really nice. With a wire electrode you have to poke it, stick it in there. But here you have a little glass electrode, and you have to sneak it up on the soma and attach it without breaking anything. If you get it stuck on there, you can look at the entire membrane potential of the cell. Before, you might have been counting spikes and so on — no, no, no, this is very different. So here's a patch clamp and here's a mouse, and the mouse is going to see oriented stimuli for a few seconds at a time. I always like to say: take the averages of those spikes and see what you get — but let's not look at that. Let's look at the patch clamp. Wow. I was so happy when I saw this. It's like looking at a musical score. There's all kinds of stuff in here: low frequencies, other frequencies, lots of different frequencies. Wow, what is going on? Well, if we're after gamma, we could just take a band in the gamma range and filter this.
But no, you can't, because this spike — this huge, screwy thing — is going to wreck the filtering. So the people who do this have to surgically remove the spike, do the filtering, and then put the waveform back. And they fight over exactly what the substitution should be — check the little equation here. So that's what we'll do: take the spike off, filter the gamma band, and put it back on. Wow — another wow. The gamma amplitude here is so tiny that it's plotted on top. But look at that: the spikes have something to do with the gamma. This is a tiny little signal, but it's somehow controlling the spike. How does it do that? Well — so here is our model. My spouse is a professor too, and she says I have too many smart students, more than my share. One of them is Rohan Zhang, and he helped me figure this out; we fought over what this should look like, and I had to give in to his idea. So what are we trying to do? We want a spike to be a scalar number. So what is there? Well, there's a delta. This is what we measure: first we have this minimum, and this is the other side of the gamma cycle. The minimum is the origin — zero — and here's the spike, and we measure the distance between them. That's the number. Then we have to scale it, so that if the spike is right on the minimum you get a big number, and it falls off very quickly from there. And here — don't you think this is pretty? I wrote this code. So here's one, a very nice one — not a very big number, but okay. Now, we've done about 3,000 of these spikes. Here's one with a higher number, where the spike is very close to the minimum; this is sort of the middle; and this is a low number. So now a spike is a number — can we enjoy the moment?
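The surgical step described here — cut the spike out, band-pass in the gamma range, put the trace back together — can be sketched like this. The window size, band edges, and filter order are my guesses, not the lab's actual settings, and the synthetic membrane potential is invented for the demo:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def gamma_band(vm, fs, spike_idx, half_ms=2.0, band=(30.0, 80.0)):
    """Surgically remove each spike (linear interpolation over a small
    window), then band-pass what's left in the gamma band, zero-phase."""
    v = vm.astype(float).copy()
    half = int(half_ms * 1e-3 * fs)
    for s in spike_idx:
        lo, hi = max(s - half, 0), min(s + half, len(v) - 1)
        v[lo:hi + 1] = np.linspace(v[lo], v[hi], hi - lo + 1)
    b, a = butter(3, list(band), btype="bandpass", fs=fs)
    return filtfilt(b, a, v)

fs = 10_000                                     # 10 kHz sampling
t = np.arange(0, 0.5, 1 / fs)
vm = 0.5 * np.sin(2 * np.pi * 45 * t) + 5.0 * np.sin(2 * np.pi * 4 * t)
spikes = [1200, 3400]
vm[spikes] += 60.0                              # crude spike artifacts
g = gamma_band(vm, fs, spikes)                  # tiny 45 Hz gamma survives
```

The slow wave and the huge spikes are gone; what's left is the small gamma-band signal whose troughs the deltas are measured against.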
Because the spike now is a number. You're back in the silicon world, where you can build cells that output a scalar. Okay, so what are we going to do now? Let's look at some data to get there. Here is a cell, and the color is the trial. The first one we saw had four spikes — one, two, three, four. The color is how many spikes were in the trial, and what you're seeing is that they're within about a radian, so they're in the first quadrant. And here — this is very important — you can also get the wavelength around the spike. So now you have what I call the instantaneous frequency. That's probably a bad name, but you can see that if you take these wavelengths, they vary all over the place, and maybe in an interesting way. And here are some more properties for you: here's the radian value for our delta. And then — this is the amazing thing for me — the gamma amplitude is so tiny. This tiny little thing seems to be controlling the spike. And here, of course, we were expecting this: the frequencies are in the 40-to-50-hertz range. Okay. So now, what are we going to do with this? We have these numbers — what are they for? So far we've done nothing; I'm showing you this to see whether you like it. One thing you can do — and this is rather inelegant — is very cheap coding. If you have an image here, 14 by 14, you have your library of cells, and each one of those is a vector, and your image is a vector. So you can just take one of these cells and project the image onto it. And you can choose 50 of these at random — if you just choose them and take their projections onto your input, you've got a code. Shall I say that one more time? Here's the image, and each one of these is a vector.
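Here is my reading of the delta measurement as a sketch: find the gamma trough preceding the spike, take the offset, and scale it so a spike right on the trough scores high and the score falls off quickly. The exponential fall-off and its width are assumptions; only the trough-referenced offset and the trough-to-trough "instantaneous" wavelength come from the talk:

```python
import numpy as np

def spike_delta(gamma, spike_idx, dt):
    """Delta for one spike: offset from the preceding trough of the
    gamma-band trace, mapped so that on-trough spikes score near 1 and
    the score falls off quickly (the exact scaling here is a guess)."""
    # local minima of the filtered trace
    troughs = np.where((gamma[1:-1] < gamma[:-2]) &
                       (gamma[1:-1] < gamma[2:]))[0] + 1
    prev = troughs[troughs <= spike_idx][-1]
    nxt = troughs[troughs > spike_idx][0]
    wavelength = (nxt - prev) * dt          # local "instantaneous" period
    offset = (spike_idx - prev) * dt
    return np.exp(-offset / (0.1 * wavelength)), wavelength

dt = 1e-4                                   # 10 kHz sampling
t = np.arange(0, 0.2, dt)
gamma = -np.cos(2 * np.pi * 45 * t)         # toy 45 Hz gamma trace
delta, wl = spike_delta(gamma, spike_idx=1120, dt=dt)  # spike near a trough
```

On real data the trace is the spike-free, band-passed membrane potential, and the cycle-to-cycle wavelength is exactly the quantity that varies "all over the place."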
I pick these at random, project the image onto them, and take the 50 numbers I get. That gives me something like this — not too bad. And then I can take the data: if I have all these thousands of spikes and all the little deltas, I can make a histogram. And this one is from the computer — a computer model that takes little windows out of images and codes them. Once you have the model, you can look at all its projections and make a histogram. So this is the histogram of the projections from the model, and this is the one ripped straight out of the data. For me, this is close enough that it makes my heart beat faster. I should have used a KL divergence to compare them, and I didn't — I was stupid, what can I say? But this is a lot of fun, don't you think? Okay, so here we go. Now, here's where your question comes into play. When you have this model, you can play with it. So here are three cells — we have about eight cells altogether. You can take these spikes and count them if you want, and the input has all these different orientations. Here's one of the cells, here's another cell, another cell — okay, no news here. But here — and this is where I need help — if you look at these very carefully, and realize that this is the max delta: what if this one is the same as that, and this is the same as that, and this is the same as that? For me, I want to jump for joy, but I don't have an elegant reason why they should match. Maybe if you're just drawing from this distribution, you get lucky or something. But at least what didn't happen is this: if the deltas had no structure to them, these would be circles — there'd be nothing here. The fact that there's something there is good; it means it's a homework problem for all of us.
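The "cheap coding" recipe — pick 50 library cells at random and use their projections as the code — is easy to write down. The least-squares readout at the end is just my stand-in for the "something like this, not too bad" reconstruction; the image and library are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(3)
image = rng.normal(size=196)                 # a 14 x 14 image, flattened
library = rng.normal(size=(196, 250))        # 250 cells, one vector each
library /= np.linalg.norm(library, axis=0)   # unit-norm cells

picked = rng.choice(250, size=50, replace=False)   # 50 cells at random
code = library[:, picked].T @ image                # 50 projections = the code

# Readout (my stand-in): minimum-norm image consistent with the code
recon, *_ = np.linalg.lstsq(library[:, picked].T, code, rcond=None)
rel_err = np.linalg.norm(recon - image) / np.linalg.norm(image)
```

Fifty random projections can't reconstruct a 196-dimensional image exactly, but they capture a meaningful fraction of it — "not too bad" for something this cheap.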
Okay — how are we doing on time? All right, I think we can do this. Now we're going to enter the most important part of the talk. In the car, you saw the person driving and watching the speedometer, back and forth. We can look at where you look: here are all the speedometer fixations, here's the car. And the question before you is: what's going on here? Did the brain have two processes that were both running, with the brain flipping back between one and the other while both were active at the same time? Or did one somehow go to sleep while the other came up, so they were trading off? Or were they running in parallel? Okay, so we want to think about whether they could run at the same time. Now, this is a Utah array — they can get about 100 cells. And what you can do with those cells — this is a fabulous paper, really fabulous — this is Christian Machens, and he has dPCA. Of course, once you have a lot of cells you can do PCA, look at the coordinates, and see what you get. But he says: look, you did this experiment, so you know what parameters you were playing with. So he has a way of binning all the spikes so that they get their special treatment, and what he can do with dPCA — it's kind of like PCA, but you're constrained to put the components that code the variance of each of your parameters on their own private axes. Did I say that right? So basically, here are all these axes and here's all the data, but here are the ones explaining your experiment. And what's amazing is that this is 16% of the spikes. 16% of the spikes carries all the variance in the experiment. So, A: I love thinking about this.
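dPCA itself is more involved, but the binning idea — give each experimental parameter its own axis by marginalizing over the others — can be sketched on fake data. Everything here is invented for the illustration (the sizes and the two population "coding directions"); it is a dPCA-flavored marginalization, not the actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(4)
n_cells, n_stim, n_time = 30, 5, 100
stim_dir = rng.normal(size=n_cells)     # population direction coding stimulus
time_dir = rng.normal(size=n_cells)     # population direction coding time
X = (np.outer(stim_dir, np.arange(n_stim))[:, :, None]
     + np.outer(time_dir, np.linspace(0, 1, n_time))[:, None, :]
     + 0.1 * rng.normal(size=(n_cells, n_stim, n_time)))

# Marginalize out time, then center over stimuli: what's left varies
# only with the stimulus. Its top PC is the stimulus's private axis.
stim_marg = X.mean(axis=2)
stim_marg = stim_marg - stim_marg.mean(axis=1, keepdims=True)
axis = np.linalg.svd(stim_marg, full_matrices=False)[0][:, 0]
alignment = abs(axis @ stim_dir) / np.linalg.norm(stim_dir)
```

The recovered axis lines up with the planted stimulus direction even though the raw data mixes stimulus, time, and noise in every cell — the same demixing that puts the experiment's variance on a few private axes.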
I can go to sleep with a grin on my face: all these spikes that got measured and are for nothing. But there's another interpretation you can give: the brain can actually do multiplexing easily — that's what it does. So whenever you're looking, you're looking into a sea of lots and lots of things going on at the same time. So here — okay, here's where I need you. We saw this, okay? Now, I apologize for this stupid figure, but this is what I need you to understand. We're doing it slightly differently here: here's our picture, and then we need our cells. You have 250 cells in your library, and here you've picked 12, and in this stupid colored bar thing that you can barely make out, these are your projections — the little r's. There are 12 of these, and for this one, the 12 you picked are there, okay? And here's another one, and another one. You can see that these are all different, of course. But now let's go crazy and code this every gamma cycle — right? Every gamma cycle, you're re-coding it. So basically you have this image, but it has to exist in time, so you're using this feedforward architecture and re-coding each cycle. But now let's ask: that's one particular frequency that you've picked. What if I had another picture, and I wanted to code it — could I pick another frequency, make another code, and just stick it on here too? Okay, now there's some bad news for me: if the other one has a different frequency, that means they are going to crash into each other. The different frequencies are running at different speeds, so there will be crossovers.
You would think that really screws you over, because all of a sudden you have a blackout — except, okay. First of all, here's the interesting thing. What you're seeing is the cells in your network, and when you put these things together, a given cell, at different frequencies, could be coding one picture or the other. From moment to moment, it's trading off. That's okay. But what are we going to do about the crashes? I thought I had a way of handling it, but it was really, really bad — so bad that I'm not going to tell you. But here's something really nice: there are lots and lots of papers saying that if you load up the dendrite of a cell, it will veto the spike. That's exactly what you want. When the two things collide on your dendrite, the cell just takes itself out of the computation. And, depending on your simulations, the network is sparse — these things running around are rather sparse — so if you throw away a couple of spikes here and there, it doesn't cost you anything. So, okay. If you were a run-of-the-mill experimenter, you might have a lot of spikes that you measured. But if only you knew the local wavelength, you would know that you're actually running about five different things at the same time, and they're just coexisting in there. Okay, so that might be it — let me see. Yes, so what did we see here? If you can do this, remember what's happened: a cell is now handling a scalar, just like you would do in silicon, so you don't have to count anything — you're going about three orders of magnitude faster. And the other thing is that Raj and Olshausen and many, many other people argue that the big network of the cortex is using a Bayesian hierarchy. And when you use Bayesian methods, you have to be probabilistic. So the thing that we need here is just right.
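The collision accounting can be sketched with two periodic spike streams at the 43 and 54 Hz mentioned later in the talk. The 2 ms "crash" window and the veto rule (drop any stream-1 spike that lands too close to a stream-2 spike) are my assumptions:

```python
import numpy as np

f1, f2 = 43.0, 54.0                  # two gamma streams, Hz
dur, window = 1.0, 0.002             # 1 s of activity; 2 ms crash window
spikes1 = np.arange(int(dur * f1)) / f1   # one spike per cycle, stream 1
spikes2 = np.arange(int(dur * f2)) / f2   # stream 2 at its own frequency

# A crash: both streams hit the same dendrite inside the window.
# Those stream-1 spikes are vetoed; the rest pass through untouched.
crashed = np.array([np.any(np.abs(s - spikes2) < window) for s in spikes1])
survive = 1.0 - crashed.mean()       # fraction of stream-1 cycles surviving
```

Because the frequencies differ, the crossovers are spread out rather than clustered, and the large surviving majority of cycles is what lets the two codes coexist on a sparse network.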
And I also think that if you can multiplex, everything is going to change — the way we think is completely going to change. It's a lot of fun. Now, I wouldn't say this is QED; I'm just saying this is a very interesting idea that raises a lot of interesting problems for everybody. And for me, it just seems like a lot of fun waiting for us. So thank you. Oh — okay, yeah, thank you very much. This is Raj, of course. You also saw Yanake Hayes' work. And this is Rohan — everybody here is really smart; these are the graduate students, and here's the postdoc. And this is Luke, the patch clamper. He's a lot of fun, and the quality of his work is just beyond compare — it's really good. So here we are. Okay, thank you.

I have a very stupid question — or maybe there's a part of your talk I lost. It's very interesting that maybe we have two different processes, right, and they can coexist. But do they coexist by using different phases of gamma, or different frequencies? Which one? That's the part I lost.

Well, you have to look at Pascal Fries — he does the phase version. He has papers where he has two networks fighting each other, and they fight by playing with their global phase: all the spikes move together, but differently across the two networks.

Two different phases?

Two different phases, yeah. So it's sort of an apple fighting an orange. I think those schemes are limited; for me, they don't generalize to any problem that you want.

And your model?

This model is any model you want, because it doesn't say what the synapses should be.
So if you have a learning algorithm that plays with the synapses, like everybody else — thousands of people — then you can do that, and then you turn the key and see what it does.

Okay, thank you.

I'd add that there's another piece of homework for someone here, something I haven't done. In the ugly slide, we were looking at forward passes, and the argument I gave there is fine. But of course, in predictive coding there's going to be feedback too. So you're really asking a tricky mathematical question: you have this network doing the thing you want it to do in silicon, but there's all this probabilistic traffic running around everywhere, and you have to prove that nothing bad happens to you. I haven't done that. I think it's okay, but I don't have anything approaching a theorem. It's all more or less linear algebra, so it should be possible to connect everything up, let it run independently, and have the two different frequencies running in a non-destructive way. For me, it's hard to get used to the idea that the state is running around in a way that's not really under your control, but you're hoping the answer pops out just as it would in a totally deterministic model instead of a probabilistic one.

But are you saying that even different predictive coding processes can learn simultaneously without interfering?

Yeah — well, they should. But I haven't done it. My brain seems to get an emotional feeling about whether something is possible, and then I have to do the work, and at the moment I've only got to the emotional part. But it shouldn't be too hard — it's all linear algebra.

That's very interesting.

Okay, so — other questions? All right, yeah.
So first, where does gamma come from? And do all the local neurons listen to the same beta rhythm?

Yeah — oh, I see. That's a rather challenging comment, because I've been talking to the people at Donders, who have MEG, and I think MEG might offer a test that would make questions like yours accessible. But at the moment, I don't know. I think it's sad that the gamma field doesn't make this distinction: in particular, they use broad-band gamma, which mushes lots of gamma frequencies together. You see these papers about "gamma," whereas for me, in this model, 43 hertz and 54 hertz are very different. Certainly, in terms of the data that you saw, the distinction exists — if you do this decomposition, you can see it. Whether it can be harnessed to answer your question, I don't know. I'd want to hear your answer — you're the expert on these things — but this idea of just smearing the gammas together seems to serve no purpose.

So the subthreshold potential you showed from the intracellular patch-clamp recording — maybe it's a collection of many, many small EPSPs and IPSPs, right? And what those neurons hear depends on the synaptic inputs each neuron receives. So even neurons located nearby, depending on differences in their incoming synaptic weights, may each hear something different.

We would have to ask Luke what that would be, but they don't have the inputs handy — they can't mark them. They're just happy to get onto the cell; that's a triumph when they can do it. They're trying to get more than one cell at a time so they can see how the cells talk to each other — I think that's where they are right now. But I don't think they can attack that question yet; it would be too hard for them. Okay.
Yeah, any other questions?

I guess it's related to this gamma question, but encoding a delta as a phase in the gamma cycle would have to be decoded later as also a delta, wherever the spike ends up. It has to have a similar reference to mean anything.

Yeah, I think the answer to the question you didn't ask is this: a lot of times people — modelers, maybe yourself too — say that if you have a distributed representation, who reads the answer? And I think you just do it with wires: wherever you want the answer, say motor cortex, you want this particular code, so you make a bus that can handle all the different codes that you have. And then you're done, I think. Did I get close to your answer?

I think so. I'm actually not a neuro person, I'm a physicist, so it's all a little bit vague to me, but I got there.

Modelers have distributed representations, and they always get criticized: okay, how do we get the output? Whereas I think it's all wires — wherever you want the answer to come out, that place just has to anticipate the different things that you want to say. I think I'm always distributed.

May I ask a very stupid question?

Yeah, go for it.

I think the predictive coding idea is very beautiful — I like it mathematically very much. But I often feel that if predictive coding is correct, then after extensive learning, neurons should become very silent. Yet we often observe the opposite: neurons increase their responsivity to specific inputs.

Ooh, say that again.

If predictive coding is correct, then after extensive learning I would expect most sensory neurons to become quiet, less active, because the predictions become more correct. But what we actually observe is often the opposite.
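The phase-reference point raised at the start of this exchange can be sketched in a few lines. This is a hypothetical toy, not a claim about the actual mechanism: a value is encoded as a spike's phase within one gamma cycle (54 Hz here, one of the frequencies from the talk), and the decoder recovers it only if sender and receiver share the same cycle reference — the rhythm acting as the common clock.

```python
import numpy as np

gamma_hz = 54.0                  # assumed carrier rhythm
period = 1.0 / gamma_hz          # one gamma cycle, ~18.5 ms

def encode(value, cycle_start):
    """Map a value in [0, 1) to a spike time within the cycle."""
    return cycle_start + value * period

def decode(spike_time, cycle_start):
    """Recover the value from spike phase relative to the SAME reference."""
    return ((spike_time - cycle_start) % period) / period

t0 = 0.123                                  # shared cycle reference
spike = encode(0.37, t0)
print(round(decode(spike, t0), 6))          # shared clock: recovers 0.37
print(round(decode(spike, t0 + 0.004), 6))  # misaligned clock: wrong value
```

The second decode shows why the "similar reference" matters: a 4 ms offset in the assumed cycle start corrupts the recovered value, which is the role the speaker assigns to wiring everything to a common bus.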
Many neurons become more responsive — increased selectivity. I see. So this is my basic difficulty in understanding predictive coding.

I don't have a full answer to that, but I don't think I'm alone there. Predictive coding, the Bayesian idea, the idea that you want distributions — that's right; you want the cortex to have all of that. But in the moment, when you want to use the likelihoods, they're important: the sensory input arriving right now has a vital connection to the truth, and you don't want to fold it into the average, into the distribution. For me, that's what the thalamus does. In your brain you want to keep everything in a nice Bayesian way — okay, that's the prediction — but in the moment, you want to be able to put the likelihoods in there and believe them, because they're sitting right there. And I think the thalamus could do that. I don't know how much anatomy you care about, but you can take those equations and wire them up so that they go right through the thalamus, and that would give you a network that can apply maximum-likelihood constraints. I don't know if that's an answer for you, but inasmuch as I've been thinking about it, I've been worried about this difficulty with predictive coding. I don't think it can be a panacea.

And the other thing is fixation. I think that when you make a fixation, you're asking a question, and your brain — the hippocampus, aware of where you're starting from — knows what the question is. It pushes the question down to the thalamus, maybe beta does that, and then you compute the answer and it comes back. So I actually think the brain works in these waves. And some people are doing experiments that have these characteristics in them.
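The questioner's premise — that error units should fall silent as predictions improve — is easy to demonstrate in a toy linear model. Everything in this sketch is an assumption made for illustration: inference is sidestepped by treating the latent causes as known, purely to isolate the trend. Error-unit activity e = x − Wz shrinks toward zero as the generative weights W learn to predict the input.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: inputs X are generated from known latent causes Z.
n_in, n_lat, n_samples = 10, 4, 200
W_true = rng.normal(size=(n_in, n_lat))
Z = rng.normal(size=(n_samples, n_lat))    # latent causes (assumed known)
X = Z @ W_true.T                           # inputs to be predicted

W = rng.normal(size=(n_in, n_lat)) * 0.1   # learned generative weights
lr = 0.01
errs = []                                  # mean |error-unit activity| per epoch
for epoch in range(50):
    total = 0.0
    for x, z in zip(X, Z):
        e = x - W @ z                      # error-unit activity
        W += lr * np.outer(e, z)           # Hebbian-style weight update
        total += np.abs(e).mean()
    errs.append(total / n_samples)

print(f"first epoch mean |error|: {errs[0]:.4f}")
print(f"last epoch mean |error|:  {errs[-1]:.6f}")
```

In this idealized model, error activity does collapse toward zero with learning, which is exactly why the empirical observation of increased selectivity is a genuine puzzle for the plain version of the theory and motivates the likelihood-routing role the speaker proposes for the thalamus.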
Because I— Thank you, yeah. I agree that we have only recently started to understand the thalamus, the function of the thalamus.

Maybe we should talk some more.

Okay, we can talk. But it's hard for me to think that the massive network is fast enough that it can do real computation in useful time. So I think it just puts up all the constraints you named there, and then something else — maybe beta — can read those fast enough to be useful. That's as far as I'd ever go.

I think the thalamus controls the operation mode of single cells, as some experiments suggest. So I did not understand everything, but I think what you suggested might be at least partly correct.

Thank you, thank you. I'd welcome any kind of discussion on that, because to my mind, it's the field: a lot of people are very good experimenters, but for the brain you really need more theory — people who can think abstractly about what the model should be before you've actually got all the data in front of you. I think as a group it would be fun to throw these ideas around and see if there's something we should have thought of before now. The big thing is that the brain has to work somehow, and so we have to guess what it's doing.

Okay, I think we've spent a lot of time already. So then, once again, thank you very much to Dana. Thanks for the talk.

Thank you for coming. And as I said, it's an adventure for me to be here. Yeah, thank you for...