Okay, and now we're live on YouTube. Hello everyone, and welcome to another talk in our seminar series, which is an initiative that is part of World Wide Neuro. We would like to thank Tim Vogels and Panos Bocellos for organizing and starting it, and it's quite good because it aims for a greener and more accessible kind of talk than the traditional seminar. The talk will be roughly 45 to 50 minutes, and afterwards we will put to the speaker the questions you ask in the chat, so please post your questions there during the talk. I would also like to encourage you to join the YouTube channel if you don't want to miss the other talks, which we have roughly every week. Today we have the honor of having here Professor Pieter Roelfsema. He started as an undergraduate in medicine and received his MD degree at the University of Groningen. For his PhD work he joined the group of Professor Wolf Singer at the Max Planck Institute for Brain Research in Frankfurt, and he received his PhD degree from the University of Amsterdam. He continued with a postdoc at the Max Planck Institute and did another postdoc at the University of Amsterdam in the group of Professor Henk Spekreijse. He then held a fellowship of the Royal Netherlands Academy of Sciences to work at the University of Amsterdam. In 2002 he started his own lab at the Netherlands Institute for Neuroscience, where he was appointed general director in 2007, a role he has continued to the present day. He is also a strategic professor at the University of Amsterdam and a professor at the AMC in Amsterdam, and he has received highly competitive grants such as the NWO VICI award in 2008 and an ERC Advanced Grant in 2013.
In his lab he studies visual perception, learning, and memory in the visual system of humans, laboratory animals, and artificial neural networks. An important goal of his lab is to develop a visual prosthesis that would allow people who have become blind to regain a simple form of sight. So, welcome Pieter, and we are looking forward to hearing your talk. You can share your screen now. Okay, thanks Antonio, and it's a pleasure to be part of this open-access series of lectures. Let me share my screen and the PowerPoint. Antonio just taught me a trick to use a laser pointer. So this is what we're going to talk about: neural mechanisms for conscious visual perception. Towards the end of my talk I'll also say a few words about how we might use this fundamental knowledge to actually restore a form of vision for people who are blind. Here is the overview of what I'm going to talk about today. I'll start with an introduction where I introduce the concept of feedback processing. Then we're going to apply these ideas to the study of visual awareness and the role of awareness in perceptual organization, and towards the end I'm going to talk about applications of this fundamental knowledge. Let's start with the introduction. If you see this picture, I think you may have seen it before; it's from a very famous paper, 30 years old now, by Felleman and Van Essen, who studied the visual cortex of macaque monkeys, which is also a very important model system in our lab. All these colored areas contribute to visual processing, and they analyzed the anatomy of the connections between those brain areas to come up with this visual cortical hierarchy, which is also, I think, very well known. Visual information enters primary visual cortex coming from the retina through the LGN, and then visual information processing starts with a first feedforward sweep that is well modeled nowadays by deep convolutional networks.
In the visual brain, as in deep convolutional networks, you'll find that low-level areas code for simple features such as the orientation of an edge. If you go to somewhat higher areas you find neurons that are tuned to feature constellations of intermediate complexity, such as particular shapes, and higher up in the hierarchy you find neurons that are tuned to object category. Now, what Felleman and Van Essen also described are these feedback connections, here in blue, that propagate information in the opposite direction, and lateral connections between areas at the same hierarchical level, which basically allow for a recirculation of activity. Today I'm going to describe how we believe this is important for particular visual processes that we capture under the heading of perceptual organization; basically, it is important for grouping all the image elements that belong to a specific perceptual object. Before I go into recurrent processing, I would like to describe a very simple study that we carried out in monkeys, in which we were interested in the neural correlates of visual awareness. The trick that we used is very weak visual stimuli, which you sometimes see and sometimes don't, so that you can compare the brain state and the neural activity between those two conditions. Since this is broadcast on YouTube I'm not going to see your reactions, only Antonio's, so I'm going to invite him to participate in this psychophysical experiment. I'm going to present a very simple visual stimulus; please look at the fixation point, around here. If you see the stimulus, Antonio, please raise your hand, and if you don't, don't raise your hand. So now it's going to start: one, two, three. This is the fixation point; the stimulus is yet to appear: one, two, three. Yes, that was easy. Now I'm going to make it more difficult: one, two, three. Okay, you didn't see it. Now another one: one, two, three.
You saw it, though it was more difficult to see, and now the last one, which is also quite difficult: one, two, three. Okay, you didn't see it. You're perfectly right, because there was no stimulus on the second and fourth iterations. Sometimes people do respond, and that's nothing to be ashamed of; it's called a false alarm, and it happens all the time. This task and the responses that people give are well described by a very famous theory from the '60s, described, for instance, by Green and Swets, called signal detection theory. The idea is that if you present a stimulus, it gives rise, across trials, to a distribution of activity in the brain, an internal representation of signal strength, modeled here as a Gaussian distribution in green. On every trial you'll be sampling from this distribution in your brain, so on a particular trial the internal representation might be, say, this strong. According to signal detection theory there is then a threshold, and whenever the internal representation is higher than the threshold you're going to say yes; that's called a hit. But it may happen that you fail to see the stimulus, because the activity stays below the threshold, so you say no although there was a stimulus, and that's called a miss. Now, if there is no stimulus, you're going to sample from the black distribution, which is shifted to the left because of course there is a weaker signal. Typically it will stay below the threshold, so you'll say no, and that is correct; it's called a correct rejection. But sometimes, due to the internal noise in your visual system, you might seem to see a stimulus that was not there, and that's called a false alarm. Okay, so that's very basic signal detection theory. It's interesting to consider what the situation becomes if on average the stimulus were stronger; then, of course, the green distribution shifts to the right.
If you now apply the same threshold, you would still have quite a high number of false alarms, the part of the black distribution to the right of the red line. So it might be a good idea to shift your threshold a little bit, make it higher, so that you have fewer false alarms, and in this situation that does not come at the expense of a lot of extra misses, because the signal is on average strong. However, if you apply this higher threshold in the situation of a weak stimulus, then you're going to have quite a number of misses. So signal detection theory describes how to place your threshold, and it also depends on how costly misses are and how costly false alarms are. For instance, if you're working in security at the airport, you might want to set a very low threshold, because misses are very costly and false alarms are not: you simply open the suitcase and check it once more. This is a very famous theory, and it is very useful for the analysis of psychophysical data. What we did not know so far is where in the brain this signal, this internal representation of signal strength, resides. That's the question we asked. We also asked ourselves what causes the variability across trials, the width of this distribution, and what in the brain corresponds to this threshold. When we started to analyze the data, we soon realized that it has many similarities with a theory about consciousness, the so-called global neuronal workspace theory. The proposal is as follows: if you present a weak stimulus, activity basically starts in the LGN, goes to primary visual cortex, and is propagated up the hierarchy, and it will also activate neurons in parietal and frontal cortex.
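As a small aside for readers: the signal detection account above can be sketched in a few lines of Python. This is only a toy illustration; the separation of the distributions (`d_prime`), the criterion values, and the trial count are arbitrary choices of mine, not values from the experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 100_000

# Internal representation of signal strength on each trial, modeled as
# a Gaussian. d_prime is the separation of the signal distribution
# (green) from the noise distribution (black).
d_prime = 1.5
criterion = 0.75                                  # the observer's threshold

noise = rng.normal(0.0, 1.0, n_trials)            # stimulus-absent trials
signal = rng.normal(d_prime, 1.0, n_trials)       # stimulus-present trials

hit_rate = np.mean(signal > criterion)            # "yes", stimulus present
miss_rate = np.mean(signal <= criterion)          # "no", stimulus present
fa_rate = np.mean(noise > criterion)              # "yes", stimulus absent
cr_rate = np.mean(noise <= criterion)             # "no", stimulus absent

print(f"hit {hit_rate:.2f}, miss {miss_rate:.2f}, "
      f"FA {fa_rate:.2f}, CR {cr_rate:.2f}")

# Raising the criterion trades false alarms for misses, which is why
# the optimal placement depends on the costs of each error type.
high_criterion = 1.5
print(f"higher criterion: FA {np.mean(noise > high_criterion):.2f}, "
      f"miss {np.mean(signal <= high_criterion):.2f}")
```

Shifting `criterion` up reduces false alarms at the cost of extra misses, exactly the trade-off described for the airport-security example.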
According to the theory there is then a process called ignition, and I'll say a few more words about it later, that stabilizes itself, so that even if you take the stimulus away there is still a trace of the stimulus you just saw. I think of it as a sort of working memory trace, persistent activity caused by this very weak stimulus. Now, it can also happen that for the very same stimulus this ignition does not occur; that would be a miss, and if it does occur, it will be a hit. So this is what we wanted to study. We had monkeys on a very simple task, actually the same task you were doing a moment ago. The monkeys looked at a fixation point, and we presented the stimulus on 50% of trials. After a delay, the monkey reported that he had seen the stimulus by making an eye movement to where it had appeared. On the other 50% of trials there was no stimulus, and in that case the correct response was to make an eye movement to a dot that was always on the screen, called the reject target, to report that he hadn't seen anything. We then varied the contrast of the stimulus. If the contrast was high, the monkey always saw it, and as we lowered the contrast, at some point accuracy started to decrease. We defined a high threshold at 80% accuracy and a low threshold at 40% accuracy, and we considered stimuli below the low threshold very difficult to see, those between the two thresholds of intermediate difficulty, and those above the high threshold easy to see. We recorded activity in V1, in area V4, which is at an intermediate level, and also in dorsolateral prefrontal cortex, where we recorded from a frontal area involved in the generation of eye movements. And here there is a caveat that I have to mention.
So although we're interested in consciousness, we are not going to dissociate consciousness from the plan to make an eye movement to a particular location in visual space. When we started to analyze the data, we were in for a bit of a surprise. What I'm showing you here is V1 activity, primary visual cortex, in response to a difficult stimulus, an intermediate stimulus, and an easy stimulus, on average, and I'm also separating the hits from the misses. We saw that in V1, for instance for these difficult stimuli, the hits gave rise to a higher level of activity than the misses, and that difference is already there from the first spikes onward that are elicited by the stimulus. It basically means that there is some variability in the information that propagates from the eye to primary visual cortex: on some trials the activity does not propagate so well and we get a weaker response than on trials on which it propagates well. You see that for the intermediate stimuli the difference is even a bit more pronounced, and for the easy stimuli the difference is still there, although maybe a little weaker. You may also see that the level of activity attained in V1 by an easy stimulus that was missed was even higher than the V1 activity elicited by a difficult stimulus that was seen. And we made sure that the contrasts here were on average the same, so this is not explained by differences in stimulus contrast. This basically means that a certain level of activity in V1 is no guarantee that it will enter consciousness, suggesting that even if V1 reaches a certain level of activity, it may still be lost downstream from primary visual cortex, en route to higher visual areas. We also recorded from area V4, and there the situation was very similar: also in V4 the missed easy stimuli elicited more activity than the difficult seen stimuli.
So also a certain level of activity in V4 is no guarantee that it will enter consciousness. Now, when we recorded from dorsolateral prefrontal cortex, the situation was really different. Here there was really a sort of bifurcation, so that whenever the animal was going to report the stimulus there was a high level of activity, and whenever he failed to see the stimulus there was a lower level of activity, although you see that on these easy trials activity ramped up, apparently not enough to cause a hit response and an eye movement toward the receptive field. This is the only area where you can really put a threshold, compatible with the idea of an internal representation of signal strength. We also looked at trials on which there was no stimulus in the receptive field. If you look at the correct rejections, here in black, you see that activity was always low and stayed low, but on the false alarms it ramped up and reached the same level as on the seen trials. So here the animal makes an eye movement to a stimulus that was not there. We actually suspect, but cannot prove, that what's going on here are spontaneous ignitions: big, quick ramps from a low level of activity to a high level of activity. These ramps happen on different trials at different time points, so if you average a number of these ramps, which are by themselves very quick but occur at different times, then in the average you're going to see a slow ramp. That's something we suspect but cannot prove. So if there is a brain region that has properties in common with the signal in signal detection theory, it's dorsolateral prefrontal cortex. Now here, as I said, we're not dissociating eye movements from the conscious percept itself, but we suspect that this is one of a number of areas that show this behavior, and there may be some other such areas in the brain.
You would then also see the same effect in a response-modality-independent manner, so if the animal reported the stimulus in another way. Okay. I presented these data to Stanislas Dehaene, one of the people who was really behind the global neuronal workspace theory. And he said, you know, that's precisely what I would predict, and we've made models about this; this is one of these earlier studies with such a model. And he said, you know what I'm going to do: I'm going to make the simplest model possible that can account for most of your data. The model he came up with has five neurons; five neurons are modeling the whole brain. There's a neuron in the LGN, then a neuron in V1 and one in V4, with a connection from V1 to V4, and then from V4 to prefrontal cortex and then to frontal cortex. You see there are also some feedback connections and self-connections. The idea is that activity gets propagated, and in the model he made the connections between prefrontal and frontal cortex so strong that if a certain level was reached, they could self-sustain their activity. The network then basically clamps into a high state and you get persistent activity: even if you then take away the stimulus, the activity will persist, so it becomes a working memory trace of the stimulus that the model just saw. All five neurons are stochastic. So that would be a hit trial; if you present the same stimulus on another trial, it may happen that activity in prefrontal cortex is just too weak to reach this level of self-sustained activity. If you then take away the stimulus, activity also declines in the rest of the network, so you get a miss. This shows you the V1 activity in the model, and you see that hits elicit more activity than misses here as well.
Actually, it's probably the other way around: if these neurons in V1 happen to be a bit more active, then it's more likely that they also cause a high level of activity in the areas responsible for self-sustained activity. You see that a certain level of V1 activity is no guarantee that it will cause ignition, because the missed strong stimuli elicit more activity than the hit weak stimuli, as we saw in the neuronal data. In V4 we basically see the same kind of effects. If you then look at the frontal areas in the model, you see this bifurcation behavior, caused by the ability to self-sustain activity. If you look at the false alarms and the correct rejections in the model, the correct rejections always have a fairly constant, basically weak level of activity, hardly any activity, while on the false alarm trials there is this slow ramp. In the model we can be sure: if you look at individual trials, you see activity fluctuating a little because of the noise, and then suddenly it reaches the critical level due to the noise, and that gives rise to self-sustained activity. It happens on only, say, 2% or 3% of trials, and at different time points on different trials, and if you then average them together, you get a slow ramp. This is something that Stan put together, I think, on a rainy Sunday afternoon, and this is our data, collected over four years of meticulous experimentation. That's the asymmetry between people who do modeling and people who do electrophysiology. But you see that the model actually gives quite a good account of many aspects that we see in the neuronal data. So what we see here is that the variations in internal signal strength are well explained by variations in the efficiency of feedforward processing from V1 to those areas that produce self-sustained activity.
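The ignition dynamics described above can be sketched in a minimal rate model. To be clear, this is not Dehaene's actual five-neuron model: the `run_trial` helper, the connection weights, the sigmoidal recurrence, and the noise levels are all my own arbitrary choices, made only to illustrate the bistability idea.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_trial(stim_strength, n_steps=300, ff_noise=0.05, pfc_noise=0.1):
    """One trial of a toy LGN -> V1 -> V4 -> PFC rate model.

    The PFC unit has a strong sigmoidal self-connection, so once its
    activity crosses a threshold it self-sustains (an "ignition"),
    even after the stimulus is switched off."""
    pfc = 0.0
    for t in range(n_steps):
        stim = stim_strength if 50 <= t < 150 else 0.0   # brief stimulus
        # noisy feedforward sweep: each area relays the one below it
        lgn = stim + rng.normal(0.0, ff_noise)
        v1 = lgn + rng.normal(0.0, ff_noise)
        v4 = v1 + rng.normal(0.0, ff_noise)
        # recurrent drive: near zero when pfc is low, ~2 when it is high
        rec = 2.0 / (1.0 + np.exp(-8.0 * (pfc - 1.0)))
        pfc += 0.2 * (-pfc + rec + 0.6 * v4) + rng.normal(0.0, pfc_noise)
    return pfc > 1.0     # persistent activity at trial end = ignition

for label, s in [("absent", 0.0), ("weak", 0.8), ("strong", 1.2)]:
    rate = np.mean([run_trial(s) for _ in range(200)])
    print(f"{label:>6} stimulus: ignition on {rate:.0%} of trials")
```

With these parameters a strong stimulus almost always ignites, no stimulus almost never does (the rare exceptions are the "spontaneous ignitions" behind false alarms), and a weak stimulus produces a mixture of hits and misses because the noise decides whether the bifurcation point is crossed.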
The threshold for perception is basically this bifurcation point: if activity in those areas that can self-sustain reaches this level, the network clamps into the high state and this gives rise to a hit, and if it stays below, it will be a miss. That level of persistent activity gives rise to what we call a working memory trace. And finally, we see that false alarms can be explained by spontaneous ignitions. Okay, so this was a brief story about the neural correlates of visual awareness. Now I'm going to tell you that this is not the whole story, because there are also more complicated forms of visual awareness. We're going to talk about more complicated stimuli that go beyond detecting whether there was a dot of light or not; I'm going to talk about scenes like this. If you look at this picture, you see that there are multiple animals, and your visual system quickly puts together what belongs to what. For instance, you don't have a hard time seeing that the nose of this zebra belongs to the same animal as, say, this leg. What we proposed a number of years ago is that this is a challenging process and that the visual system really has dedicated processes for this perceptual organization. This is necessary because vision actually starts with a very local analysis by neurons with tiny receptive fields, which initially seem to be concerned only with a very small fraction of the entire image. But this is not how we perceive.
So there must be powerful mechanisms that put together all the image fragments that belong to a single perceptual object. We call this incremental grouping, and we propose that it is carried out by labeling all the image elements that belong to a single perceptual group with enhanced neural activity: enhanced activity is basically propagated over all the image elements that need to be bound in perception into a coherent object representation. So if this is the image that comes in and you're interested in the central zebra, your visual brain is going to create this labeled representation, where all the image elements that belong together are labeled with enhanced activity. There is actually no need for synchrony here; we also looked at synchrony, and synchrony is completely unrelated to binding. Now we want to understand how this works, so we have to think beyond feedforward connections and also consider the recurrent connections that allow for recirculation of activity. To demonstrate this, I'm going to present a stimulus very briefly; try to see what is there. One, two, three. So Antonio, you're my subject: did you see what was there? I think it was a bird. Oh, you're really good, that was a bird. Many people who look at this, and maybe I presented it for too long, see that there's some brown and some green stuff, but there's just not enough time to create the percept of the bird; you're just a very good observer. So there is a bird here, and what you find is that this takes a little more time than just perceiving that there is a dot of light at a particular location in the visual field. Okay. Now we're going to talk more about this process of perceptual organization, and a workhorse in the lab has been this figure-ground texture segregation task, a test that has been around for a long time.
I like to refer to a good colleague of mine, Victor Lamme, who published about this more than 25 years ago. He recorded from primary visual cortex; in green you see a rectangle, which is the receptive field of a group of neurons. Here the image elements in the receptive field belong to a figure, and here, if you look carefully, exactly the same image elements now belong to the background; the difference between the two images is outside the receptive field, so this is a contextual effect. What he demonstrated is that if you present this stimulus, there is first a feedforward response that does not discriminate between figure and background. You have to wait a little bit, say 100 milliseconds or so, and then the figure starts to elicit more activity, here in primary visual cortex, than the same image elements when they belong to the background. So this is a contextual effect, and the speculation has been that it is caused by feedback connections from higher visual areas back to primary visual cortex. In fact, we were able to show that, so today I'm going to convince you and present experimental evidence that this is indeed what's going on. So this is what I want to talk about: how this works in the brain, so I'll give a mechanistic explanation of how we think this can work, and I'll say a few words about the role of visual attention. The goal in the lab has always been to demonstrate that these small effects you see in primary visual cortex really matter for perception, and I'm going to present data showing that this indeed matters and that it depends on feedback from higher visual areas back to primary visual cortex. If you want to ask how this works in the brain, it is always good to look at the classic psychologists, and here are a few of them; Stephen Grossberg actually worked as a modeler.
What you then find is that there are proposals for two processes that happen in the brain. The first is a process that is sensitive to the difference in orientation at the boundary between figure and background; that's a boundary-detection process. The second process that has been proposed is called region filling or region growing, and it works as follows: suppose you select image elements with this left orientation; then you might want to co-select neighboring image elements with the same orientation, because they are likely to belong to the same perceptual object. Okay, now how does this work in the brain? Well, if you're interested in boundary detection, the proposal has been that it relies on what is known as iso-orientation inhibition: inhibitory connections between neurons tuned to the same orientation. Here I show you some of these models. Suppose you are a neuron tuned to the right orientation and your receptive field happens to fall on the background. Your neighbors tuned to the same orientation are also highly active, so they will supply you with strong inhibition. The same happens if your receptive field is in the figure and you are tuned to the left orientation: you get a lot of inhibition. Only if you are close to the boundary do you get inhibition from just one side, so at the boundaries there is a relative release from the strong inhibition, and you get more activity there. Now we're going to talk about the opposite, or rather complementary, process, known as region growing. The idea is this: suppose you are a neuron and your receptive field is in the center of the figure, and suppose there is extra activity there, maybe because there was a cue at that position.
Then you want to convince your neighbors that they should also enhance their response, especially if they are tuned to the same orientation: you want to spread the enhanced activity to the neurons tuned to the same orientation, and that requires iso-orientation excitation, exactly the opposite connection scheme. Activity will then spread until the entire figure region is filled with enhanced neural activity. And here we have a bit of a problem, because if you want to make a neural network that does both boundary detection and region growing, you need opposite connection schemes. So this won't work unless you have a clever trick, and we believe that the brain came up with one, which is the following: we believe that the boundary-detection process is closely tied to feedforward processing, while the region-growing or region-filling process is associated with feedback connections. We made models demonstrating that you can implement this in an artificial neural network. This would be one of the input images to the network. Here we have primary visual cortex, with the neurons tuned to the left orientation, which respond to the figure, and those tuned to the right orientation, which respond to the background. Of course, in reality these two maps would be intermingled, because you have orientation columns at all locations in visual space, but in a model you can easily keep them separate. In the feedforward pathway we implement this iso-orientation inhibition scheme, so you get extra activity at the boundaries, here and here, and if you superimpose the maps you see there is indeed extra activity at the boundaries. Okay, so that works. The trick now is to not only do this in primary visual cortex, but to repeat the same connection scheme in higher areas, where receptive fields are larger.
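The two complementary processes can be sketched in a toy one-dimensional model. This is only an illustration of the idea, not the lab's actual network: the inhibition strength of 0.3, the enhancement of 0.5, and the single attentional seed are all arbitrary assumptions of mine.

```python
import numpy as np

# 1-D toy image: 0 = left-tilted elements (the figure),
# 1 = right-tilted elements (the background)
orient = np.array([1]*6 + [0]*5 + [1]*6)
n = len(orient)

# --- feedforward sweep: boundary detection via iso-orientation inhibition ---
# Every element drives its neuron equally; neighbors with the SAME
# orientation inhibit, so boundary elements are released from inhibition.
response = np.ones(n)
for i in range(n):
    for j in (i - 1, i + 1):
        if 0 <= j < n and orient[j] == orient[i]:
            response[i] -= 0.3        # inhibition from a same-orientation neighbor

# --- recurrent sweep: region growing via (modeled) feedback --------------
# Seed extra activity at the figure center, then let the enhancement
# spread to neighbors that share the figure's orientation.
enhanced = np.zeros(n, dtype=bool)
enhanced[n // 2] = True               # attentional seed in the figure center
for _ in range(n):                    # iterate until spreading stops
    spread = enhanced.copy()
    for i in range(n):
        if enhanced[i]:
            for j in (i - 1, i + 1):
                if 0 <= j < n and orient[j] == orient[i]:
                    spread[j] = True  # co-select same-orientation neighbor
    enhanced = spread
response = response + 0.5 * enhanced  # the whole figure is labeled

print(np.round(response, 2))
```

The feedforward pass leaves extra activity only at the orientation boundaries, while the spreading step halts exactly at those boundaries, so the enhancement fills the figure and nothing else, which is the labeling idea behind incremental grouping.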
So if you then have a small figure, you see extra activity at the boundaries, and at some point you reach a level where receptive fields are so large that they completely cover the figure. The interesting thing is that the neighbors of these neurons all lie on the background, so the neurons tuned to the left orientation hardly receive any inhibition anymore. That is what psychologists sometimes call pop-out. If you present a larger figure you also get pop-out, but it may happen at a higher hierarchical level. This all happens in the feedforward processing pathway, and it corresponds nicely to the psychologists' idea that pop-out is a pre-attentive process: it happens basically automatically, stimulus-driven. You then want to use the feedback connections to fill in the center of the figure with enhanced neural activity, and you can make this happen. There are some additional assumptions, and I'm afraid I don't have time to really go into the details, but I can answer questions about them; there are some tricks to make the feedback connections such that the activity stays nicely focused on the figure, because the receptive fields at higher levels are much larger and much blurrier than the nice small receptive fields at lower levels. Boundary detection is pre-attentive, and the prediction of this model is that the region-filling process might depend on visual attention, because it depends on the loop to the higher visual areas and back. That is something we tested in an experiment that is by now, I guess, eight or nine years old, but it's really interesting to see what's going on there, so I'll cover it. Basically, we trained monkeys to look at a fixation point in the center of the screen, and then there was a figure. You also see a gray zone, which is an example location of a V1 receptive field; it was not visible to the monkeys.
On particular recording days, which we called figure-detection days, the animal was supposed to make an eye movement to the center of the figure. On other days we trained the animals to completely ignore the figure, because now they had to mentally trace this curve to its other end and make an eye movement to the end of the curve, to the larger red circle. These experiments were carried out by Jasper Poort. So on figure-detection days the animal made an eye movement to the center of the figure, and on curve-tracing days he completely ignored the figure; we are thus manipulating attention to the figure. This is a V1 receptive field, and on some trials the receptive field would fall in the center of the figure; across trials we varied the location of the figure, so sometimes the receptive field might fall on the edge, and sometimes completely on the background. You should realize that there are always texture elements in the receptive field, so the feedforward drive of the neurons is always the same. We basically replicated the effect that Victor Lamme saw in '95: in red is the activity elicited by the background elements, and in green and blue the activity elicited by elements that belong to the figure. What we were interested in here is the difference in activity elicited by figure and background, so in the next graph I'm going to subtract the background response in red from the figure response, so that you can see where these extra spikes driven by the figure occur. This is a bit of a complicated graph. You have to realize that the figure is larger than the receptive field; the receptive field is here in green, the figure here as a square. This is the spatial dimension: at zero, the receptive field is precisely in the center of the figure.
At two degrees, the receptive field is on the edge of the figure; at four, it's in the background; at minus two, it's on the other edge; and at minus four, it's again in the background. And here is the temporal dimension, so this is the timing of these extra spikes. What you see is that these extra spikes are first elicited at the edges between figure and background, and then there's a later filling-in process that fills in the center of the figure with enhanced neural activity. Okay, and this is what happens when the animal is paying attention to the figure; this was on the figure-detection days. I now invite you to make a mental picture of your prediction of what the activity should look like if the animal is not paying attention to the figure. This is what it looks like. Okay, so you see that the boundaries between figure and background are still there, but the region-filling process is much less complete. So this is in accordance with the idea that these boundaries are detected by a feedforward process, while this region-filling process really depends on the loop to higher visual areas and back, and it doesn't happen so much if you're not attending to the figure. It's the same sort of result in area V4, where receptive fields are larger. The receptive fields here more or less match the size of the figure, so in V4 you already see the filling in of the center of the figure early on. And you see that far fewer of these extra spikes are produced if the animal is not interested in the figure. I also have some evidence from a microstimulation experiment that if you silence V4, so you take away those spikes, the filling-in process in V1 also becomes weaker. So it really seems to rely on feedback from the higher visual areas, but I'll come back to that in the optogenetic experiment later. Let me summarize this part of my story as follows.
So V1 is a map of space, and there's some spontaneous activity. Then you present a figure-ground stimulus, so activity is increased. A little bit later you see also extra activity at the boundaries between figure and background, and that happens irrespective of whether you're attending the figure or not. If you then wait a little bit longer, you see a filling in of the center of the figure with enhanced activity in V1 as well. It's particularly strong if you're attending the figure, but much less so if you direct attention elsewhere, as if these non-attended figures are left in a more primordial state. So the early boundary detection is a pre-attentive process, and this later filling in seems indeed to depend on attention, as predicted by the model. A question that we were always very interested in, but that was difficult to address for a long time, is: do these subtle effects, these response modulations that we see in primary visual cortex, really matter for perception, or are they just an epiphenomenon? To answer that, we wanted to use optogenetics to silence primary visual cortex, and we tried that in monkeys, but it turned out that the technology is still not completely reliable; optogenetics in monkeys, especially silencing, is quite difficult. So we wanted to change species, we wanted to go to mice. But first a brief intermezzo: a few years ago we were also able to record spiking activity in area V3 of a human. The reason is that this person had epilepsy, and her epilepsy always started with visual hallucinations. The neurosurgeons advanced so-called sEEG electrodes into visual cortex, and at the end of these sEEG electrodes there are micro-wires, so that we could record spiking activity. And what we then saw is that indeed, also in the human, figures elicit more activity than the background.
Just as we saw in the monkeys. And you see here it happens early on, because the size of the figure was about the same as the size of the V3 receptive fields. Now, to get to our causal experiments, it's very useful that this effect also occurs in mice. This was nicely worked out a few years ago. So in the mouse we have primary visual cortex, and also there, figures elicit more activity than the background. Okay, so now we're going to train mice. These are different stimuli that we can use for mice, and we trained the animals to make a lick movement. So when they see a stimulus, be it a figure on a background or a contrast-defined stimulus, on the left, the mouse licks left, and if the figure appears on the right, the mouse licks on the right side; there's a double-sided lick spout. But this shows you their accuracy: they're not as good as monkeys are. In the contrast task they reach about 90%, and you see that with the figure-ground stimuli, orientation-defined, phase-defined, and texture-defined, they're 70% correct or so. Not so good, but good enough to give us a signal to work with. Okay, so we first started to record activity in primary visual cortex to see whether we would find the same effects as we had been seeing in monkeys. And the answer is yes, you basically see the same effect, so this is an effect that generalizes across species. So here is a contrast stimulus, maybe a bit of a boring condition: there's something in the receptive field or there's nothing in the receptive field. Then orientation-defined figure-ground stimuli: figures elicit more activity than the background; same for phase-defined and texture-defined stimuli. And if you want to know what the latencies of these effects are, you can simply subtract the background response from the figure response, as we do here.
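That subtraction-and-threshold logic for estimating a modulation latency can be sketched in a few lines; the traces below are synthetic and illustrative, not the recorded data:

```python
import numpy as np

# Synthetic figure and ground responses; extra "figure spikes" appear
# from about 95 ms after stimulus onset.
rng = np.random.default_rng(0)
t = np.arange(200)                       # time after stimulus onset, ms
figure = rng.normal(0.0, 0.1, t.size)
ground = rng.normal(0.0, 0.1, t.size)
figure[95:] += 1.0

def modulation_latency(figure, ground, t):
    """First time point at which the figure-minus-ground difference
    crosses half of its maximum."""
    diff = figure - ground
    threshold = 0.5 * diff.max()
    return int(t[np.argmax(diff > threshold)])
```

With these synthetic traces the estimate lands near the built-in 95 ms onset; the same recipe applies to the contrast-defined and texture-defined conditions.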
You see that the contrast stimulus, so this is just the latency of the visual response, has a latency of 60 milliseconds; for orientation-defined figure-ground it's 95 milliseconds; and phase-defined and texture-defined figure-ground modulation happens even a little bit later. Now we are in a really good position, because in mice we have very powerful tools with which we can silence neural activity. You can basically wipe out all the spikes, and this is an experiment that was carried out by Lisa Kirchberger in the lab. What we're doing is injecting a very strong optogenetic silencer, GtACR2. If you then shine a blue laser, you're basically wiping out all the spikes from primary visual cortex. So by shining the laser light early on, we can completely silence V1 and knock out the entire visual response. We can also start shining the light a little bit later, so that you let through the feedforward response while you selectively silence only this late phase, in which you see this figure-ground modulation. Or you can also let through a little bit of that late phase, and so on and so forth. Okay. So this is the experiment, and this shows the results for the stimulus which did not require any figure-ground segregation, because it was a contrast stimulus. This graph is a little bit complicated. Here on the right side is what happened when we did not shine any laser light; the gray symbols are individual mice and green is the group average. Here on the left you see what happened if you start shining the light at about 20 milliseconds, and you see that the animals become worse: here they are about 60% correct. So they're not yet at chance level, which suggests to us that some of this accuracy might be driven by pathways that bypass primary visual cortex, maybe through the superior colliculus.
If you then start postponing the light to later moments, you see that if we postpone the light to, what is it, maybe 70 or 80 milliseconds or so, the animals are back to ceiling performance. Okay, so if we let through this amount of activity, the animals can do the task. So the contrast detection task does not seem to require any late V1 activity. The figure-ground task is very different. If we start shining the light early on, knocking out all the spikes, you see that the animals are basically at chance level; they can't do the task. And we have to postpone shining the light to maybe 120 milliseconds or so before they're back to their ceiling performance. So we were really excited about this: here is the first evidence that for the figure-ground task the animals really need a little bit of this late activity, in which we see this figure-ground modulation. And in the phase-defined task, the amount of activity that they need is even longer: here they need maybe up to 160 milliseconds or so before they're back to ceiling performance. I'm really excited about this; I think this is the first demonstration that you need late V1 activity to do figure-ground segregation. We then did a very simple variant of this approach to demonstrate that this figure-ground modulation that we see in primary visual cortex relies on feedback from higher visual areas, by now injecting the virus in the ring of areas around primary visual cortex. For those of you who are not so familiar with mouse visual cortex: you have primary visual cortex here in the center, and most of the mouse higher visual areas are situated in a ring around primary visual cortex. So we're basically trying to target as many of these higher visual areas as we can with only four injections of the virus. This shows you the virus expression: it's nicely surrounding V1, and we're trying to not have any virus in V1 itself. And then we present the figure-ground stimulus.
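The logic of reading off a "critical duration" from these accuracy-versus-laser-onset curves can be sketched as follows; the onsets and accuracies below are invented illustrative numbers, not the measured data:

```python
import numpy as np

# Hypothetical accuracy-vs-laser-onset curves: V1 is silenced from `onset`
# ms onward, and we ask when performance returns to ceiling.
onsets = np.array([20, 40, 60, 80, 100, 120, 160])            # ms
acc_contrast = np.array([0.62, 0.75, 0.88, 0.90, 0.90, 0.90, 0.90])
acc_figure   = np.array([0.50, 0.52, 0.58, 0.63, 0.68, 0.70, 0.70])

def critical_onset(onsets, acc, ceiling_frac=0.95):
    """First laser onset at which accuracy reaches a fraction of the
    ceiling (here taken as the accuracy at the latest onset)."""
    ceiling = acc[-1]
    ok = acc >= ceiling_frac * ceiling
    return int(onsets[np.argmax(ok)])
```

On numbers shaped like the real result, `critical_onset` comes out earlier for contrast detection than for the figure-ground task, which is the key comparison in the experiment.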
The animals are not even doing the task; we just record activity in primary visual cortex, here, where this dot is. And this shows you the figure-ground modulation in V1 that we always see; this is when we don't shine the light. And this is when we project the laser light and start shining already early on, so that all these higher areas are silenced. You see that the feedforward response stays the same, which is expected, right? This is just the information that comes from the LGN. But you see that this figure-ground modulation, this difference in activity, is much reduced. This is also true when we average across a number of recording sites in different mice. So we're basically able to abolish a high fraction of this difference response, this figure-ground modulation, demonstrating indeed, as we suspected, that it depends on feedback from higher visual areas to primary visual cortex. Okay, so we conclude that figure-ground modulation in V1 depends on feedback from higher visual areas. Now another corollary here is that we now start to understand how the binding problem is solved. The binding problem is grouping all those image elements that belong to a single perceptual object, and we see that they are labeled by enhanced activity in early visual areas, including primary visual cortex, grouping together all those image elements into a coherent representation. So if we interfere with this process, then object perception gets lost; at least we demonstrated that for figure-ground modulation. And another thing: if you compare this to what I talked about in the first part of my talk, there we saw that recurrent processing can stay focused in higher visual areas, as in a working memory trace. In the figure-ground task, it seems that this recurrent processing reaches back all the way to primary visual cortex.
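The three-stage account (identical feedforward drive everywhere, extra spikes at orientation boundaries, feedback-dependent filling in of the figure interior) can be captured in a minimal toy sketch; this is an illustration of the idea, not the lab's actual model:

```python
import numpy as np

# A 1-D texture: background orientation 0, figure patch (positions 8..15)
# with orthogonal orientation.
orientation = np.zeros(24)
orientation[8:16] = np.pi / 2

# Stage 1, feedforward: every texture element drives its neuron equally.
ff = np.ones_like(orientation)

# Stage 2, boundary detection: extra spikes where the local orientation
# differs from that of the neighbor (orientation contrast).
contrast = np.abs(np.diff(orientation, prepend=orientation[0])) > 0
boundary = ff + 0.5 * contrast

# Stage 3, region filling: feedback from a higher area, whose large
# receptive field covers the figure, adds activity over the whole figure
# interior (this is the attention-dependent component in the data).
feedback = np.zeros_like(ff)
feedback[8:16] = 0.5
filled = boundary + feedback

figure_minus_ground = filled[8:16].mean() - filled[:8].mean()
```

Silencing the feedback term is the toy analogue of the ring-injection experiment: the feedforward response survives but the figure-minus-ground difference collapses.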
So if you're thinking about the global neural workspace for conscious perception, for these more complex stimuli it may even be necessary for the early visual areas, including primary visual cortex, to contribute. Okay. Now, in the last five or ten minutes I would like to say a few words about how we apply these ideas to create a visual cortical prosthesis for people who are blind. In the world there are about 40 million blind people, and in many of them the connections between the eyes and the brain are lost, because they have very severe damage to the eyes, which also causes damage to the ganglion cells that give rise to the fibers of the optic nerve. So then there is no information anymore from the eye that can reach the brain, and those people will not benefit from a chip in the retina. You really have to look at the next processing stages, and we are focused here on primary visual cortex, which has a very nice retinotopic map of visual space. We know from previous work that if you insert electrodes, we are using these Utah electrode arrays, these sort of nail beds with a contact point at each tip, and you stimulate a group of neurons in primary visual cortex, then a person, and this can also be a person who has been blind for several years, will perceive a dot of light at the location in the outside visual world that corresponds to where you're stimulating in the map in primary visual cortex. Now the idea is to have a lot of electrodes, say 1,000 or so, so that you can create phosphenes; that's what we call these artificial visual percepts, or pixels. If you have 1,000 electrodes, you can produce 1,000 phosphenes at different locations in the visual field, basically wherever you put your electrodes in the map. And by then stimulating only a subset of these electrodes, the idea is that you can convey visual information into the brain of somebody who has been blind for a long time.
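The retinotopic logic, each electrode's cortical position fixing the visual-field location of its phosphene, can be sketched with the standard log-polar (monopole) model of V1; the constants `k` and `a` below are assumed illustrative values, not measured ones:

```python
import numpy as np

# Monopole model: cortical position z (mm, complex) relates to visual-field
# position w (deg, complex) by z = k * log(1 + w/a). Inverting it predicts
# the phosphene location evoked by stimulating at a given cortical site.
k, a = 15.0, 0.7   # illustrative constants (mm, deg)

def cortex_to_visual(x_mm, y_mm):
    """Map a cortical position (mm from the foveal representation) to the
    visual-field location (deg) of the phosphene it would evoke."""
    z = x_mm + 1j * y_mm
    w = a * np.exp(z / k) - a          # inverse of z = k*log(1 + w/a)
    return w.real, w.imag
```

The qualitative behavior matches the talk: an electrode at the foveal representation maps to the center of gaze, and electrodes further away evoke phosphenes at progressively larger eccentricities.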
This is how such a future system might look: the person wears a camera, and the camera images are processed in a portable device that translates them into brain stimulation patterns, as you see here; I hope you can see the outline of a woman. These patterns are then sent wirelessly to a brain chip that activates the correct electrodes in the visual brain, based on their retinotopy. Recently we wanted to test whether this can work, so we did a sort of proof-of-concept study. This was carried out by Xing Chen in the lab, and basically we implanted 16 of these electrode arrays, each of them with 64 electrodes, in the primary visual cortex of monkeys. This is what it looks like. This is a connector, which is transcutaneous, so you see it on the outside, and it is screwed down onto the skull of the animal. You see wires, actually wire bundles, and at the end of each of these wire bundles is an electrode array with 64 electrodes. And we implanted it: every square is one of these arrays, 14 of them in primary visual cortex and two of them in area V4. If you then map out the receptive fields, and these animals were not blind, they could simply see, so we could sweep a light bar and determine where the receptive fields of all these neurons are, you see this nice retinotopy. These are electrodes that have receptive fields at the fovea. And we basically replicated the same result in another monkey. So we're able to create many hundreds of phosphenes at different positions in the visual field, because we have electrodes everywhere. What I'm going to show you now is what the activity looks like; maybe I have to switch to the arrow cursor here. What you see here is a mock-up; this is of course not what you can see during the experiment. So here are the arrays, the animal is looking at this fixation point, and we project the activity of all the neurons onto the arrays.
You see that a moving light bar gives rise to a wave of activity, because it successively enters the receptive fields of different neurons. We can also map the activity out in visual-field coordinates: now we're showing the activity at the location of the receptive field in the outside world. Okay, so here again you see the activity, and you see that the bar of light is followed by a wave of enhanced activity. The delay between the bar and the wave is simply the delay between the cells in the retina and the cells in primary visual cortex; it's about 40 milliseconds. So that works. But we're not just interested in recording, we're actually interested in stimulating. So we used a task, the same task I demonstrated before, where the animal makes an eye movement to a dot, and the animals were trained extensively. The trick is that we're now not going to present any dots; we're just going to replace the dot by electrical stimulation of one of these hundreds of electrodes, one electrode only at any one time. Here you see again the receptive field centers, and the receptive field of the electrode that we are stimulating is going to be highlighted in white. In magenta you see the fixation position of the monkey. So the monkey starts a trial by looking at the fixation point, then we stimulate a group of neurons, and you see that the animal makes an eye movement towards the receptive field of those neurons. That's how we can see that most of the electrodes are working, a first proof of principle. But the real proof of principle concerns shape perception: what happens if you stimulate multiple electrodes at the same time? To find that out, we trained the animals on another task in which they saw a letter of the alphabet, here the letter T, and then, after a delay, there was a choice menu, and we trained the animals to make an eye movement to the letter they had just seen.
Okay, and now the trick is that we're going to stimulate multiple electrodes at the same time, with receptive fields that outline a particular letter, here the letter A, hoping that the animal will indeed make an eye movement to the corresponding letter in the choice menu. And this shows you the results. So now I'm showing you in white the receptive fields of the electrodes that are stimulated: here the letter L, and the animal makes an eye movement to the corresponding letter. We were really, really happy when we got this result, because it was preceded by a lot of training, training the animals to recognize the letters, and we also actually trained them on versions of the letters in the visual display where they were composed of small dots, because that's how we believe these letters will be perceived by the monkey. Okay, so this is really important: it shows that it's possible to convey shape information about letters by directly stimulating a number of electrodes at the same time. In the meantime, we started to work together with a group in Spain, led by Eduardo Fernández, who are aiming to do this in a human, and this is a paper that came out last week. This is Berna Gómez, and she was implanted with a Utah electrode array, actually only 100 electrodes. She had been blind for more than 10 years. This is how the electrode array was implanted: it was placed at the boundary region between V1 and V2. We stimulated a set of electrodes here in the shape of a letter O, and she indeed perceived a large letter O. Here we stimulated only four electrodes, and she perceived a small letter O. So the results that we saw in the monkey indeed seem to generalize to human vision. But it was not always as simple: here you might not have predicted that she would perceive a letter I, and here we definitely did not predict that she was going to perceive a letter L.
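The encoding step behind these letter experiments, choosing which electrodes to stimulate so that their phosphenes trace out a shape, can be sketched as follows; the electrode coordinates and the tolerance are hypothetical:

```python
import numpy as np

# Electrode id -> phosphene location (deg), as obtained from receptive-field
# mapping. Illustrative coordinates only.
phosphenes = {
    0: (0.0, 0.0), 1: (0.0, 1.0), 2: (0.0, 2.0),   # vertical stroke
    3: (1.0, 0.0), 4: (2.0, 0.0),                  # horizontal stroke
    5: (2.0, 2.0),                                 # off the letter L
}

def electrodes_for_shape(phosphenes, shape_points, tol=0.25):
    """Return the electrodes whose phosphene lies within `tol` deg of any
    point outlining the target shape."""
    chosen = []
    for eid, (x, y) in phosphenes.items():
        if any(np.hypot(x - px, y - py) <= tol for px, py in shape_points):
            chosen.append(eid)
    return sorted(chosen)

letter_L = [(0, 0), (0, 1), (0, 2), (1, 0), (2, 0)]
```

Stimulating the returned subset should evoke a phosphene pattern in the shape of the letter; electrodes whose phosphenes fall elsewhere stay silent.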
Okay, so I think what's going on with these unpredicted percepts is that we only implanted a very small array in her visual cortex; it is only four by four millimeters, and you have to realize that primary visual cortex has a surface area of about 25 square centimeters. If you stimulate electrodes that are so nearby, you can expect a little bit of interference here and there, and that's why we cannot always predict what she's going to see. I think this will require further study. Okay, I'll end with a small video of what such prosthetic vision might look like in the future, if we get this to work. You can put on virtual reality glasses and then try to simulate what such phosphene vision is going to look like. Okay, so it's definitely not going to be as good as normal vision. But for people who are blind, it might make a big difference, because it might help them to find their routes, say going to the supermarket, and it might also help them in reading or in seeing the facial expressions of the people they're talking to. With that I'm going to close. In the first part of my talk, I talked about the conscious perception of weak stimuli, about internal signal strength and the threshold for perception. In the second part of my talk, I alluded to the importance of the feedback connections that are required for figure-ground segregation, suggesting that the extent of this global neural workspace might depend on the stimulus and the task. And in the last part of my talk, I talked about how we can use these ideas to create a visual cortical prosthesis for blind people. Now, last but not least, these are the people who did the work. The early consciousness study was done mainly by Bram van Vugt, Bruno Dagnino, and Devavrat Vartak, in collaboration with the labs of Stefano Panzeri and Stanislas Dehaene. The work on figure-ground segregation was done by quite a number of people in the lab, and they're listed here.
The work on the visual cortical prosthesis also received contributions from many lab members. Antonio was first in the lab of Eduardo and is now in Amsterdam, and I would also like to mention Bing Li, who is also very active in this project. These are the funders down here. So thanks a lot for your attention, and this might be the time to ask questions. Great. Thank you, Peter. It was a really nice talk and really interesting; I also liked being a subject of it. So, we don't have a lot of time for questions, but I'll ask one. The prosthetics you showed look really interesting to me, but I was wondering if you foresee a lot of difficulties when going from what you were showing, letters or figures with very high contrast, to the natural scenes that people see in the street. How can you predict that this is going to work? So the nice thing is that you can do a lot of image processing. The last picture that I showed, with the guy who was waving, was done with the Canny edge detector, so very simple processing. But you can also use artificial intelligence to enhance your image and to focus on those objects that are relevant for behavior. So I think, with the very sophisticated image processing algorithms that are now becoming available, it would not be super difficult to convey the essence of the objects that are relevant for the subject. Yeah, that makes sense. And where would you implement this? Because this would have to be done online, right, while the person is viewing, so it would need to be an extra processing unit. Yes, there are now actually good algorithms that do even what is known as semantic segmentation. What you do is you get object categories and then you find all the edges that belong to particular objects; that can be done on a portable device, online, with a very short delay.
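The edge-based front end mentioned here can be sketched with a plain gradient-magnitude detector standing in for the Canny detector; the camera frame is a small synthetic example:

```python
import numpy as np

# Reduce a camera frame to a binary edge map, which is then mapped onto the
# limited number of phosphenes. A simple gradient-magnitude threshold is
# used here as a stand-in for the Canny detector mentioned in the talk.
def edge_map(img, thresh=0.5):
    gx = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
    gy = np.abs(np.diff(img, axis=0, prepend=img[:1, :]))
    return (gx + gy) > thresh

frame = np.zeros((8, 8))
frame[2:6, 2:6] = 1.0               # a bright square "object"
edges = edge_map(frame)
```

Only the outline of the square survives; the uniform interior produces no edge signal, which is exactly the kind of sparse representation a 1,000-phosphene display can carry.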
So something like that could probably work very well. It sounds really promising. Okay, so maybe I can ask you another question. When you showed the recordings in the monkey, I was wondering if you were able to access different layers of the cortex, and if you saw differences between them, between, say, layer 5 and layer 2/3, in the different areas. Yes, so we did some laminar studies, and in one of them we looked at figure-ground segregation in monkeys. What you find is, well, one part was expected, and that is that the feedforward response from the LGN starts in layer 4 and then quickly propagates to layer 2/3, and layer 5 then becomes active maybe 10 to 20 milliseconds later. That was not completely unexpected; I think previous studies found similar results. Then we looked at the orientation-contrast process that is sensitive to the boundary between figure and ground, and that process seems to be implemented only in layer 2/3, and not in layer 5. If you then wait a little bit, you get the feedback from higher visual areas that does the region filling, and this region filling is very strong in layer 5 and layer 2/3, and not absent but a little bit weaker in layer 4 and layer 6. We also did a study in which we looked at working memory; you can actually get a neural correlate of working memory in primary visual cortex, and those effects were most pronounced in layer 2/3 and layer 5 and much weaker in layer 4 and layer 6. So we do get sort of laminar fingerprints of many of these cognitive processes in primary visual cortex. Okay, thank you. We don't have any more questions in the chat at the moment, so I'll just paste the link for the Zoom meeting; if any of the viewers want to join the discussion, feel free to join us. And also, let me just check this. Well, of course, thank you for the talk again. My pleasure. So let's see.
So I'll leave the recording up on YouTube for a while, because it's a bit delayed and people may join the Zoom chat later. I did my PhD at the University of Alicante. Okay, so you know Eduardo. I'm not sure, maybe he joined later, because that was a few years ago. I think there's also one in Elche; I think Eduardo is in Elche. Yes, it's in Elche. When I joined, it was not in Elche, it was in Alicante itself, so it may be different places. Yes. So maybe I can ask you some other questions I had. In the first part of your talk, you showed a model of how the different parts of the cortex may interact, and then you added the parietal cortex, which you said has a strong connection to the prefrontal cortex. I'm wondering how the activity of this parietal cortex may look; maybe it's similar to prefrontal, or more similar to visual cortex. Yeah, so I think that is something that Stan added to the model; we did not make any recordings there. But it's known that if you look at persistent activity for working memory, it can also be quite pronounced in parietal cortex, and more so than in inferotemporal cortex. There might also be contributions from the hippocampus or the medial temporal lobe. So that was just speculation; in this particular task we don't have any recordings there, but we know from previous work that if you make an eye movement plan, there's also very strong persistent activity in LIP, the lateral intraparietal area of the parietal cortex, which plays a role in the planning of eye movements.
So I think it would be interesting to also record, in this task, in areas that do not plan eye movements but that might have a more general role in keeping the working memory online, in a more amodal, response-nonspecific manner, so that you could read it out for making an eye movement or for reporting it in a different manner. Then you might be closer to what people call phenomenal awareness, right, the awareness of something without the necessity to report it. But that's something we did not do in this study. Okay. Actually, I was wondering, and I'm not that expert in this kind of experiment, when you say that the monkey was not attending, is it that it's not fixating on the fixation point? No, we always check that the animal is looking at the fixation point. He was tracing a curve: there was this fixation point, there was the square, and then the two curves, and on some of the days he only cared about the curves, and on the other days he was looking for the square. But he was always fixating, because, you know, if the animal is not fixating, you have these tiny receptive fields and they can be anywhere, so you have no control over where you're recording from. Yeah, that's true. Okay, that makes sense. Yes, yes, sure. Hey, Peter. So something I've been curious about with the center-surround: basically, you sort of always have a rate code, where something happens quite a bit after the initial percept. And one of the arguments in sensory neuroscience is that a rate code is not very useful, because you need to transmit important information quickly, so most of the important stuff coming from the retina is contained within the first couple of spikes. So I'm just curious: say you're in the environment, and there's a snake close by.
I mean, how about your ability to segregate it from the background with quite a large temporal delay? Do you think that's an issue? Or does it adapt to the predictiveness of your surroundings and update something quickly? So these are two questions: one is about the rate code, and one is about whether it is going to be fast enough. Yeah. So it's not in the synchrony, for sure; I mean, we looked at that. And of course, the brain is more than one neuron: every position is represented by more than one neuron. So it's not the unreliable rate code of one neuron; if you have, say, 1,000 or 10,000 neurons for every spatial position, it becomes actually quite a reliable code. But is it fast enough? I mean, you still need the delay to get the signal up and then back down. To get figure-ground, yeah. Well, it has to happen somewhere, and if it's a complicated figure-ground task, it takes time. Of course, you can have efficient detectors for some specific combinations; maybe there you could afford to set aside some dedicated architecture, for instance to detect snakes, in a feedforward manner, and then you don't have to go back. But the thing with the binding problem is that you can also bind features of new objects that you have never seen before. And, I mean, we also did human psychophysics on it; it's just a very slow process, an incredibly slow process. So if you want to bind features of objects that you never saw before, and they are defined by local groupings, then it can easily take you 200 milliseconds or so before you know that this and this are part of the same object. We measured that in psychophysics, so there's no doubt that this takes time. Yeah, 200 milliseconds to see a snake, maybe it's not even so bad, right, because snakes are lazy animals. Yeah, I was just curious. So if you see something new, is the binding longer?
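The pooling argument made here, that many neurons per position turn a noisy rate code into a reliable one, can be sketched with Poisson spike counts; the firing rates and counting window are illustrative:

```python
import numpy as np

# Figure vs ground rate difference read out from pooled Poisson spike
# counts. Illustrative numbers: 30 vs 20 Hz in a 100 ms window.
rng = np.random.default_rng(1)
rate_fig, rate_gnd, window = 30.0, 20.0, 0.1     # Hz, Hz, s

def discriminate(n_neurons, n_trials=2000):
    """Fraction of trials on which the pooled figure count exceeds the
    pooled ground count (chance level = 0.5)."""
    fig = rng.poisson(rate_fig * window, (n_trials, n_neurons)).sum(axis=1)
    gnd = rng.poisson(rate_gnd * window, (n_trials, n_neurons)).sum(axis=1)
    return np.mean(fig > gnd)
```

A single neuron gives only a weak read-out in this short window, while pooling a hundred neurons per position makes the figure/ground decision essentially perfect, which is the point made in the answer.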
Do you have to form the association, and do you then retain some sort of residual association so that it's faster in the future? Yeah, definitely, we know that from psychophysics as well, because we asked subjects. You saw the zebra image; we had other versions of that, images with, say, two horses or two scooters or two boats, two vehicles or two animals at the same time. We placed two dots, and we asked the subjects: are these dots on the same animal or vehicle, or are they on different animals or vehicles? And that takes some time, say 600 milliseconds or so, before you can respond; of course, there's also the response time and the time to activate your hand and all these things. But then we inverted these images, so they were upside down, and then it takes more time. So your experience with the visual world gives rise to object templates, and these object templates are probably very helpful in binding. But if you invert the image, then they are not there anymore, or not as efficient anymore, so then it takes longer. Thanks. Okay, there was a question I missed in the chat, if you don't mind my having a look. I'm not very clear what it's referring to, but maybe you can see it in the Zoom chat; I pasted it. It's about firing rates across brain regions. But I'm not sure which figure it is about. Maybe I can share again. Let me go back a bit. This one? Yeah. So, I mean, I think they were about the same. In V1 and V4 we typically recorded multi-unit activity, but I know from experience that if you look for the best cells, they will certainly, for the right stimulus, go up to maybe a hundred spikes per second or so. You probably get the same in V4, and probably the same in prefrontal cortex.
So yes, in general you probably get very similar firing rates in these three areas, but we did not make a direct comparison between firing rates, because this was multi-unit activity, so you also have some units that respond less, and we don't actually know how much every single cell is firing, because we just see the mixture of different cells. In prefrontal cortex we typically had single units, so we really isolated the unit, although sometimes we had multi-units there as well; but we definitely also have recordings where we had only single units. Some of them fire 20 spikes per second, some of them fire many more spikes per second, so you have a whole variety. I don't think there are that large differences between the overall firing rates across the structures that we recorded from. I know that if you record from the hippocampus, you sometimes have to do your very, very best to squeeze out 10 spikes per second, but maybe that's because you have a hard time finding the right stimulus; of course, it's hard to know if you don't get a high firing rate. Okay, thank you. I'll stop the broadcasting.