Okay, now we are live. Hello, welcome everybody to another talk of the Sussex Visions seminars. I am Antonio Nojosa, a postdoc in Leon Lagnado's lab. And this, as you may know, is part of the Sussex Visions series of the World Wide Neuro talks, an initiative put forward by Tim Vogels and Panos Bocellos. The purpose of this initiative is to make talks greener and more accessible, so we want to thank them for starting it. Just so you have an idea, the seminar will be one hour in total: around 45 to 50 minutes of talk, and then at the end we will answer questions. So I would like to encourage you to put your questions in the chat, and I will put them to the speaker at the end. I would also like to encourage people to join the channel if you don't want to miss the new talks that we're going to have. So today we have with us Nick Steinmetz. He's an assistant professor at the University of Washington in Seattle and is a member of the International Brain Laboratory. Nick studied bioengineering at the University of Pennsylvania and then did a PhD in neuroscience at Stanford University, supervised by Tirin Moore and Kwabena Boahen. After finishing his PhD in 2014, he joined the laboratory of Kenneth Harris and Matteo Carandini as a postdoc at University College London, and it was there that he contributed to the development of Neuropixels technology, as he will show us today. Since 2017 he has been program manager for the Neuropixels consortium, and he's also a Next Generation Leader at the Allen Institute. He started his laboratory at the University of Washington in 2019, and there he investigates the neural circuits and systems that underlie perception and cognition across the brain. So welcome, Nick, to the seminar, and we're looking forward to hearing your talk. Thanks very much, Antonio. Thanks, everyone, for tuning in.
Apologies in advance if you can hear some construction noise in the background; there's a building being built outside of our lab, and there's not much I can do about it today, but I hope it doesn't interfere too much. So welcome, and I will give a talk today in two parts: a first part about large-scale approaches, which will of course be about large-scale electrophysiology with Neuropixels arrays, as Antonio mentioned, and a second part about the distributed circuits underlying visual decision-making. Probably many of you in the audience have heard of Neuropixels, and the general concept probably doesn't need too much motivation, but here's how I think about the need for large-scale electrophysiology, large-scale recordings of the brain. If we're thinking about any kind of behavior of reasonable complexity, involving motor actions, cognition, perception, we might immediately, based on a huge history of literature investigating these different neural functions, be thinking about roles for multiple different brain regions, for instance parts of cortex, thalamus, basal ganglia, superior colliculus, hippocampus, et cetera. And when we start to think about how all of these different regions might be working together in concert in a single behavior, we can immediately realize, just based on anatomy, that the problem is far from simple. That's because the connections are largely recurrent: there are loops of information flow between and across these different structures.
And when you have that kind of neural architecture (I'm not saying anything dramatic or novel here), it can create complex dynamics, it can create mixed signals, and it can make it impossible to predict a priori whether any particular node in the circuit will have any particular kind of representation, unless you knew much more detail about the exact pattern of synaptic connectivity than we really do. And so for me, this strongly motivates the need to monitor the activity of neurons in all brain regions that we think might be involved in a behavior, or are plausibly involved in a behavior, simultaneously. That's what I'm going to show you in the second half of the talk, and here I'll start with the technology that's going to enable us to do that: the Neuropixels technology. So the Neuropixels 1.0 device was introduced in 2017, a project led by Tim Harris at Janelia, and the device took advantage of CMOS silicon fabrication technology (the kind of silicon device fabrication that goes into making the chips in your cell phone) to really revolutionize this particular style of recording brain activity, where you insert a thin probe into the brain that has multiple recording sites along it. Using advanced CMOS technology enabled a better device in two ways. First of all, the recording sites, and the lines along the inserted shank that are required to get the signals off of the probe, could be dramatically miniaturized, such that we can record on a single shank that's only 70 microns wide from 384 sites simultaneously. And secondly, the CMOS technology allowed Neuropixels to build the data-processing architecture onto the probe itself. So all of these 384 signals coming up off of the probe are amplified, filtered, digitized, and multiplexed right on the device itself, which helps lower noise and reduce interference.
It also makes it feasible to record that many signals in a small device, because after digitizing and multiplexing the signals, they can be sent off to a computer on a relatively small and flexible wire path. And this is just a zoom-in of the tip. Each one of these black squares is a recording site, and so you have closely spaced recording sites giving you a sort of tetrode-like improvement in spike sorting, because each neuron can be detected by multiple sites, and that's just continuous up the length of the shank. A single Neuropixels 1.0 device can record across about four millimeters of tissue simultaneously in the default configuration. So that was the Neuropixels 1.0 device, and as you'll see in the second part of my talk, it allowed us to do some really, I think, novel science, but there were some problems that we wanted to improve upon. In particular, we designed Neuropixels 2.0 to be optimized for long-term stable recordings in mice that are still as large scale as with Neuropixels 1.0. The first design change you can see is that the device is miniaturized. This part of the device, which is rigid and which lives near the brain (the shanks are down here and are inserted into the brain), is miniaturized by about a factor of three. And likewise the headstage, which is connected to that part by a flexible cable, is also miniaturized by about a factor of three. Secondly, the Neuropixels 2.0 device has four shanks that can be inserted into the brain, versus only one with Neuropixels 1.0. And this is an advantage in multiple situations.
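To get a feel for why on-probe digitization and multiplexing matter, here is a back-of-envelope data-rate calculation. The sampling rate and bit depth below are my own illustrative assumptions, not figures stated in the talk:

```python
# Back-of-envelope data rate for one probe's spike-band stream.
# ASSUMED values for illustration: 30 kHz sampling, 10-bit samples.
N_CHANNELS = 384          # simultaneously recorded sites (from the talk)
FS_HZ = 30_000            # assumed spike-band sampling rate
BITS_PER_SAMPLE = 10      # assumed ADC resolution

bits_per_second = N_CHANNELS * FS_HZ * BITS_PER_SAMPLE
mb_per_second = bits_per_second / 8 / 1e6
print(f"{mb_per_second:.1f} MB/s per probe")
```

Even under these modest assumptions, a single probe produces over 14 MB/s, which is why serializing the digitized data onto one thin cable, rather than routing 384 analog wires, is what makes the small form factor possible.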
For instance, you can record more densely in a two-dimensional plane with Neuropixels 2.0 than with 1.0, which is useful for certain brain regions, but in particular it's useful for this chronic long-term recording case, because you just have many more sites accessible to record from day after day. You still only record with Neuropixels 2.0 from 384 sites at a time, but rather than switching between a selection of 384 out of about 1,000 sites as with 1.0, with Neuropixels 2.0 you can switch between a selection of 384 sites out of over 5,000 total accessible sites. Secondly, the headstage, even though miniaturized, was upgraded to collect data from two probes simultaneously. This means that a single implant with two probes and one headstage, all weighing just about one gram (light enough for a mouse to easily carry around), can stream 768 channels of neural data, from amongst 10,000 available recording sites, to a computer. The last main design change with Neuropixels 2.0, at least from a user's point of view, is the recording site arrangement. On 1.0, as you saw in the picture on the last slide, there was a sort of checkerboard arrangement of recording sites. On 2.0 they're instead vertically arranged in two columns and a little bit closer together: 1.0 has 20 micron spacing from row to row, and 2.0 has 15 micron spacing from row to row. I'll say in a few slides why we think that was an important design change. All right, so this is what they look like. As I mentioned, this is the rigid part here, two probes back to back, and you can see them flexibly connecting to a single headstage. And again, another one of these images is showing the recording sites along the probe and the dimensions here: just like Neuropixels 1.0, an individual shank is 70 microns across and about 20 microns thick.
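As a toy illustration of that site-selection idea: the figures of 4 shanks, 384 simultaneous channels, and roughly 5,000 total sites come from the talk; the assumption that a selection is a contiguous block on one shank is my simplification of the probe's real switch-matrix constraints:

```python
import numpy as np

# Toy model of Neuropixels 2.0 site selection.
# From the talk: 4 shanks, ~5,120 total sites, 384 recordable at once.
N_SHANKS, SITES_PER_SHANK, N_CHANNELS = 4, 1280, 384

def select_bank(shank, first_site):
    """Flat indices for a contiguous block of 384 sites on one shank.

    Simplification: real probes restrict selections to fixed banks
    via an on-probe switch matrix; here any contiguous block is allowed.
    """
    sites = first_site + np.arange(N_CHANNELS)
    assert sites[-1] < SITES_PER_SHANK, "selection runs off the shank"
    return shank * SITES_PER_SHANK + sites

sel = select_bank(shank=0, first_site=0)   # bottom of shank 1
print(sel.size, sel[0], sel[-1])
```

Day after day (or session after session), you would call `select_bank` with different arguments to tile all ~5,000 sites, which is the switching scheme described for the freely moving recordings below.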
All right, so the data quality is still as high as you're used to seeing, or used to acquiring perhaps, from Neuropixels 1.0. Here's some example raw data where you can clearly see spiking activity well separated from the noise, and also local field potential data from a selection of individual raw traces. And here are some spike-sorted neurons, where you can see that individual neurons are detected on multiple adjacent channels, and multiple neurons that occur on overlapping sets of channels can nevertheless be well isolated from each other; the spike sorting is high quality, as indicated by these clean autocorrelograms and flat cross-correlograms. All right, and here is a data slide showing the full promise of Neuropixels 2.0 for chronic recordings in freely moving mice. This is data from Cagatay Aydin in Sebastian Haesler's lab, and what Cagatay did was exactly as I described: he inserted one Neuropixels 2.0 probe in the left hemisphere of the mouse and one in the right hemisphere, implanted them both chronically, and made these recordings in a freely moving condition, when the mouse was free to walk around in its cage. What you're seeing here is spiking data from over 6,000 recording sites in this freely moving mouse. As I mentioned, you can only record 768 channels at a time, so just to be clear, not all 6,000 of these sites were recorded simultaneously. Instead, Cagatay recorded first from the lower part of shank one on the left hemisphere probe, simultaneous with the lower part of shank one on the right hemisphere probe, and then switched which recording sites he was recording from, to say the upper part of shank one, and then the lower part of shank two, and the upper part of shank two, et cetera, et cetera.
So hopefully that switching scheme makes sense. The idea is that you're not going to record all 6,000 channels simultaneously, but if you have this probe chronically implanted, you can, day after day or even trial by trial, record from all 6,000, or even all 10,000 of the sites if they're all in the brain. And I mentioned that the multi-shank design is also useful for densely sampling brain regions that are amenable to a 2D sampling pattern. Here's data from Junchol Park in Josh Dudman's lab showing that a four-shank probe inserted into the striatum can cover a patch of about three-quarters of a millimeter by three-quarters of a millimeter in 2D. That enabled them to isolate, in this particular example recording, 150 neurons, whereas with a single-shank Neuropixels probe they might have had something more like a quarter or half of that. And that enabled them to observe these consistent patterns of activity across trials (each one of these is a spike raster from an individual trial); so basically, high signal-to-noise measurements of population activity. All right, so how well do these recordings work in a chronic situation? You may have seen a lot of papers doing fantastic work developing flexible probes or nanoscale probes like carbon nanotubes, and the argument in favor of those probes is that it shouldn't be possible to make a good, long-lasting recording from a rigid device, and Neuropixels probes are rigid devices relative to brain tissue. Nevertheless, on the time scale that we've been interested in for mouse and rat recordings, which is about a year (you can see this plot goes up to day 309), we in fact do find that the recording quality is good. So here you can see the pattern of where spikes are found across the probe as a function of days.
And you can see that we can stably record from, say, what looks like layer 5 cortex here, the CA1 pyramidal layer here, and this region here; we can stably record from all of these brain regions over that entire time. And these are some spike waveforms from individual neurons recorded on that very last day, day 309, just to illustrate that the signal-to-noise ratio is still high even then. So on that time scale, we think these probes are suitable for stable and sufficiently long-term recording. This is summarizing data across six different labs over just two months, but we found that most recordings were stable across the two-month time scale, with some recordings slightly declining in these metrics, total firing rate and neuron count. You can see more details of this analysis in the paper. All right. And to make Neuropixels 2.0 probes still more useful for the chronic case, Cagatay and Sebastian, along with some collaborators of theirs, designed a fixture that enables chronically implanting the probe and then recovering it afterwards. The way this particular fixture works is that you cement one piece to the skull, you insert the probe on a second piece, and then later come back with a separate device that removes the top piece from the implant. And this is data showing that implanting the same probe over and over again resulted in high-quality recordings over and over again. All right. So I mentioned at the beginning this subtle change in the recording site arrangement, from a checkerboard pattern to vertical columns. I now want to say why we think that's important, what problem we were trying to solve with it, and why we think we've solved that problem. The problem we were trying to solve was recording instability.
Basically, the problem is that if you fix your probe relative to the skull (and perhaps, in an acute situation, relative to a table), the brain is not fixed in its location relative to the skull. The brain is free to move around relative to the skull even if your probe is not, and that can create motion of the brain relative to the probe. And we can actually observe that motion in the data. This is a spike raster where the x-axis is time, as you're used to. In this case, each spike is color-coded according to how large its amplitude was, and the y-axis is not neuron number; rather, the y-axis is where each particular spike was recorded along the length of the probe. And you can see that there are these stripes of similar-amplitude spikes across the probe, indicating that one or a small number of neurons was recorded stably across this 15-minute segment, but that the different neurons appear to move simultaneously with each other up and down the length of the probe. You can see the same movement pattern at all these different depths along this segment of probe. So this is indicating that the brain was moving relative to the probe. In principle, of course, the probe could have been moving, but we believe the probe was fixed, so the brain was moving relative to the probe. And this creates a problem for a spike sorting algorithm: if you just naively treat each spike as if it were coming from a stable source, then you'll think that spikes at the top of this movement pattern are from a different neuron than spikes down here, because they will be recorded on different sites. So how can we solve this problem? We had the idea to use the fact that we're sampling continuously, and, as you can see with your eyes, we can observe this pattern of movement. We had the idea that we could use concepts from image processing.
We could use image registration, in particular, to correct the motion. If we can estimate the motion, and if we can re-interpolate the data using our continuous sampling along the length of the probe, then we can correct the motion and restore a stable signal post hoc, in the software processing after the recording. That process depends on your resolution of spatial sampling: the more spatial sampling you have along the dimension you want to correct, the better your correction can be. And that's why we wanted to make the sites closer together and vertically oriented, so we have maximal spatial sampling, maximal spatial resolution, along the length of the probe. All right, so here's how it works. This algorithm was created by Marius Pachitariu at Janelia. Here's the original raw data, where this upper-left segment is the segment we were just looking at. Now we're zooming out to see multiple brain regions, and also a segment of the recording in which I was moving the probe up and down programmatically to induce a known pattern of motion, so we could check how well the algorithm worked; we had a ground-truth known pattern of motion that we could check against the algorithm. The way this algorithm works is by first estimating the change in position at each point on the probe and at each moment in time. Different parts of the probe, as you can tell, must be estimated differently: this part of the probe is moving separately from this part of the probe, due to the mechanical disconnection between these brain regions. But once that's done, you can simply re-interpolate the data. So here's data from a certain time during the recording where we see a particular spike, and here's data from a later time in the same recording where we see a spike that looks like it might be from that same neuron, but shifted up. By using the whole pattern of spikes, we can estimate that indeed the recording was shifted up by 53 microns.
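A minimal sketch of that estimate-then-reinterpolate idea, using a synthetic one-dimensional activity profile along the probe rather than real spike data (the real Kilosort 2.5 implementation is far more sophisticated, estimating non-rigid, time-varying motion):

```python
import numpy as np

# Synthetic activity profile along the probe: one "neuron" at 1000 um.
depth = np.arange(0, 3840, 10.0)                   # site depths (microns)
profile = np.exp(-((depth - 1000) / 80.0) ** 2)

# Simulate the brain moving up by 50 um relative to the probe.
true_shift = 50.0
moved = np.interp(depth - true_shift, depth, profile)

# Estimate the shift by maximizing correlation over candidate shifts
# (a 1D version of image registration).
candidates = np.arange(-100, 101, 10.0)
scores = [np.dot(np.interp(depth - s, depth, profile), moved)
          for s in candidates]
est = candidates[int(np.argmax(scores))]

# Undo the motion by re-interpolating the moved data back down.
corrected = np.interp(depth + est, depth, moved)
```

With the shift correctly estimated at 50 microns, `corrected` matches the original profile, which is the sense in which re-interpolation "restores a stable signal" for downstream spike sorting.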
And we can correct that shift by re-interpolating the data to be shifted down by 53 microns. Therefore, we're now able to see that, in fact, this spike really was apparently from the same neuron as that spike. And when we apply this algorithm to the original raw data and rerun it over the whole recording, we get back a stabilized recording that is amenable to higher-quality spike sorting. All right, I will skip the slide that quantifies exactly how well this worked on the basis of that ground-truth imposed motion, and I will also skip how we showed that we can do this same process with chronic time scale recordings, not just minutes time scale recordings, but you can check out the paper or ask me more about it afterwards. Let's see, I think that's all I have to say about Neuropixels 2.0. So just to summarize: Neuropixels use CMOS technology to enable large-scale electrophysiology, at a larger scale than was previously possible. Neuropixels 2.0 probes are miniaturized and have over 5,000 sites on four shanks, and they're optimized for stable long-term recordings; with these new algorithms that Marius Pachitariu developed and implemented, we can correct for brain movement on short and long time scales. And I should mention, actually, if you're looking to use this algorithm on your own data, we do think it also works on Neuropixels 1.0 datasets, perhaps not quite as well as on 2.0; that's quantified in the paper as well. What you want to use is the Kilosort 2.5 release for the algorithm that we described in this 2.0 paper. All right, so that's Neuropixels 2.0. And now I'm going to talk about how we applied Neuropixels 1.0 to study visual decision-making across brain regions. I want to introduce, with this little video loop, what we were thinking about going into this project and what kinds of behaviors we're interested in studying.
In this video, you can see two deer, and one of them appears to detect some sensory stimulus (I'm not sure whether it's something visual or auditory) and orients towards it, and then tries to make a decision about the nature of that stimulus. You can see it sort of stopped chewing there for a moment, as if in contemplation. And it's trying to make what is really a life-critical decision: on the basis of all the sensory information I have available to me right now, is it necessary to alert my conspecifics and run away, or is it fine to just ignore that stimulus and assume that it's non-threatening? If you get that decision wrong, the consequences could be fatal. So I think there's one interesting process here, which is this discrimination and decision moment. The other interesting process that we can see in this video is the moment of detection: while this deer detects the stimulus, this other deer seems oblivious to it and fails to detect the stimulus altogether. And so we can ask a few really simple questions. Why are some stimuli perceived and others not? Why did the one deer perceive that stimulus and the other deer not? Or why is a certain stimulus perceived by one observer on some trials, and not perceived by the same observer on other trials? And for discrimination: how are multiple pieces of sensory evidence combined to make a choice? As you can imagine, we're not going to be studying deer making life-critical decisions in the wild; we're going to try to boil down these two questions into something we can study in the laboratory in a repeatable and reproducible way, and in a way that we can quantitatively analyze. To give an idea of the intellectual background for why we took the experimental approach that we took, I want to point to this paper from Hernández and Romo in 2010 that was really influential in my thinking.
In this paper, they employed the classic task from the Romo lab in which monkeys attempt to discriminate the frequency of a buzz to their fingertip. The details of the task are not so critical for what I'm going to describe right now; suffice it to say that there are multiple sensory stimuli, the subject makes a decision about them and executes a motor action to earn a reward, just like the task I'll tell you about in a moment. In this classic series of studies, summarized by this Hernández paper, they recorded from many different brain regions, and they were, I think, at least at the outset, thinking about and looking for a serial process: you might have a sensory signal in one place, a decision taking place somewhere else, and then an action being carried out by a third place. I think that was a totally reasonable way to think about how the brain might solve such a perceptual decision-making problem, but it turned out to be completely wrong, at least at the level of what we can observe from neural correlates. In this plot, each row is one of those brain regions I told you about, and each column is a different time point during the task: when you get information about the first stimulus, when you get information about the second stimulus, and when you execute your decision. Each dot in the scatter plots represents one neuron, and the color indicates what kind of information that neuron represents. And so in S1, primary somatosensory cortex, this area really matched their idea: you've just got information about the first stimulus, you've just got information about the second stimulus, you don't hold any memory of those stimuli, and you don't have any representation of the decision or the action. Great, this is really your sensory processing area. But from there on, you can see that there's almost nothing distinguishing the coding in these different areas.
Every subsequent brain region represents the first stimulus, all the way down to primary motor cortex. Every brain region represents the memory of that stimulus, every brain region represents the second stimulus, and also, in cyan, the decision-variable neurons, the ones that encode the difference between the two stimuli. In fact, remarkably, even out here in M1 at the time of action execution, you still have red and green neurons; red and green neurons are the ones that encode the first-stimulus and second-stimulus information, which stopped being relevant a long time ago. So in this task, in monkey cortex, in different parts of sensory, motor, and frontal cortex, it really appears that there's no distinguishable single locus of the decision, of the action, of the sensation, or of the memory. Instead, it's a really highly distributed process amongst these cortical areas. And what we wondered was whether this was true across the whole brain. Is it possible that the role of cortex broadly is the memory or the decision process going on in this task, and that if you look subcortically, we really could localize a particular decision or motor nucleus dedicated to a single aspect of this type of behavior? Or instead, is the principle of distributed encoding found broadly across the brain? So we designed a behavioral task that was similar in concept to the Romo task, but for mice. In this task, mice are seated with computer screens surrounding them. We use these, of course, to show visual stimuli to the mice, and the mouse has a wheel that it can turn clockwise or counterclockwise to give a report about what it perceives on the screens. We reward the mouse with liquid rewards when it gives correct answers, to train the mouse and motivate the mouse.
Finally, the mouse is head-fixed during this task, which enables us both to precisely control the visual stimulus that the mouse sees and also, of course, to combine the performance of this task with the kind of large-scale recordings that I described in the first part of this talk. So what do the mice see on the screen, and what should they do with the wheel? If they see a stimulus on the left side of the screen, they should turn the wheel clockwise to bring that stimulus to the center of the screen; when the wheel turns, the stimulus moves on the screen. If the stimulus appears on the right, then by the same logic they need to turn the wheel counterclockwise, again bringing the stimulus to the center of the screen. Either way, the task is to turn the wheel such that the stimulus ends up at the center of the screen. Now, on some trials there can be two stimuli at once, and the mouse should choose the higher-contrast stimulus to bring to the center to get a reward. And finally, on other trials there's no stimulus presented, and the mouse should hold the wheel still for 1.5 seconds in order to earn a reward. In this way, we can study both discrimination (this is a contrast discrimination task) and also detection, because the mouse must determine whether any stimulus is present versus no stimulus at all. And the mice become highly proficient at this task. Here I'm showing behavioral data from an example session, from just the trials with a single stimulus at a time, where the right side of this axis indicates trials with just a stimulus on the right, and the left side indicates trials with just a stimulus on the left. And these are the proportions of rightward choices that the mouse made. So this is showing that when the stimulus is indeed on the right, the mouse is in fact normally making a rightward choice, which is correct.
And when the stimulus is on the left, the mouse is correctly making leftward choices most of the time. When there's zero contrast, that is, no stimulus is presented, the mouse correctly makes the no-go response by holding the wheel still most of the time. You can see this mouse performing very well: there's just a small proportion of misses, which are trials with a no-go response when a stimulus was in fact shown, and of false alarms, which are trials with a right or left response when no stimulus was shown. All right. So the mice can perform this task well, and here's a summary of the full decision behavior for all the mice included in the study, showing all the contrast conditions: zero, low, medium, and high contrast on the left, and zero, low, medium, and high contrast on the right. You can see that not only are they correctly performing the detection aspect of the task, giving the no-go response primarily when there's zero contrast on both sides; they are also performing the discrimination aspect of the task, trading off their left and right responses as the contrasts of the two stimuli come closer to each other. All right. And so then we combined performance of this task with Neuropixels recordings, using two or three Neuropixels probes at a time to record multiple brain locations while mice performed the task. Here you can see a mouse; he's got the wheel that he's turning, and the stimuli (this is showing what the mouse sees on the screen) appear, he turns the wheel to bring the stimulus to the center of the screen, and he gets rewarded. And while the mouse is performing this task, you can see we're recording; this is a Neuropixels probe, this one inserted into the frontal cortex, and there's another one behind it that you can't see, inserted into visual cortex and the structures below.
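The behavioral summary described above (the proportion of left, right, and no-go responses as a function of the contrasts shown) can be sketched in a few lines. The trial data here are entirely hypothetical, just to show the shape of the computation:

```python
import numpy as np

# Hypothetical trials: signed contrast = (right contrast - left contrast),
# so negative means the stimulus was on the left. Choices are the wheel
# responses: 'L' (clockwise), 'R' (counterclockwise), or 'NoGo' (hold).
signed_contrast = np.array([-1.0, -1.0, -0.5, 0.0, 0.0, 0.5, 0.5, 1.0])
choice = np.array(['L', 'L', 'L', 'NoGo', 'NoGo', 'R', 'R', 'R'])

# Fraction of rightward choices per contrast condition, as in the
# psychometric plot: near 0 for left stimuli, near 1 for right stimuli.
for c in np.unique(signed_contrast):
    trials = signed_contrast == c
    p_right = np.mean(choice[trials] == 'R')
    print(f"contrast {c:+.1f}: P(right) = {p_right:.2f}")
```

A real analysis would also track `P(left)` and `P(no-go)` per condition and fit a psychometric function, but the grouping logic is the same.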
So here we're recording from parts of frontal cortex, anterior cingulate, prelimbic, and secondary motor areas, and from parts of primary visual cortex, parts of hippocampus, and parts of thalamus, all simultaneously. This enables us to actually observe the complex dynamics in all of these areas at once, and to see how they are coordinated, how the representations of information in different parts of the task in these different areas relate to each other. You can see that there's a lot of interesting complexity here, even at a glance. There are some local dynamics, for instance at this moment in primary visual cortex, but there are also shared dynamics across brain regions at certain times. So that's what we wanted to understand. In the course of this study, we recorded from 10 mice, and almost 30,000 neurons from those 10 mice. These are the recording locations on a 3D volume of the mouse brain, in Allen Common Coordinate Framework space. You can see that even with a relatively small number of subjects, we were able to record a significant fraction of the total volume of the mouse brain, at least of the forebrain and midbrain; we did not record cerebellum or hindbrain in this study. And here are the brain regions we did record. In cortex, we have primary visual cortex and a selection of the secondary visual cortical areas. We have frontal cortical areas like MOs, that's secondary motor cortex, and PL, prelimbic cortex. We have primary motor cortex, somatosensory cortex, and retrosplenial cortex. Subcortically, we're recording from parts of the basal ganglia in purple: this one is CP, the caudoputamen, that's the striatum, the input structure of the basal ganglia, and SNr here is the substantia nigra pars reticulata, one of the output structures of the basal ganglia. We have the hippocampus in pink, and the thalamus in green, including visual thalamic nuclei, LP, LD, and LGd, which is out of this slice here.
And we have midbrain areas like the superior colliculus, both deep layers and superficial layers, shown from the top view over here, as well as areas like the midbrain reticular nucleus, below the SC, and the zona incerta, an inhibitory nucleus adjacent to the thalamus. So we really cover a large number of interesting regions that have been implicated by various studies as having certain kinds of roles in a task like this one. But to our knowledge, these areas had never been studied simultaneously in a single task, and thus a comparative analysis of their coding in a perceptual decision-making task had not previously been undertaken. So to start understanding the neural encoding of this task across the brain, let's start with just an example neuron. This neuron was located in the deep layers of primary visual cortex. This is the waveform of that neuron as recorded on the probe, showing a well-isolated single neuron. And here's the spiking activity of this neuron during the task, where each row of this raster is an individual trial, and the trials are split up by high, medium, or low contrast contralateral stimuli, or no contralateral stimulus at all. You can see this neuron responding robustly following contralateral stimuli, and in particular it responds more strongly, and at lower latency, for high-contrast stimuli. So this is a nice, classic primary visual cortex type of neuron; this is what you might expect from primary visual cortex. It's no surprise, I think, to anyone that neurons in V1 have robust visual responses after stimulus onset in this task. So we wondered what other brain regions have robust visual responses after stimulus onset in this task. To start to visualize that, I'm going to use a different representation of the data, where I take each neuron and its average activity over time across conditions, and plot that as a row in a color map.
So going from low, to high, to medium activity here is going from white, to black, to gray, and each row in this color map represents a different neuron: all of these neurons from primary visual cortex, and these neurons from secondary visual cortical areas. And again, I don't think anyone will be too surprised for me to tell you that both primary and secondary visual cortical areas respond robustly following stimulus onset in this task. But what about when we zoom out and look across the brain? So here I'm showing the activity from frontal and motor cortex, from hippocampus, from parts of the basal ganglia, from thalamus, and from midbrain, and showing that there are neurons in every one of these regions that are responding robustly following stimulus onset in this task. So this already was a surprise to us. We didn't have hypothesized roles for many of these regions during this task, and so the fact that there are neurons there responding following stimulus onset was surprising to us. And the next question is, what aspects of the task do these different neurons represent? How can we understand whether they are really visual representations, or do they perhaps represent something about the decision or the motor process? And we can understand that by breaking down neural activity according to the different conditions that we have in this task. So we have both correct and incorrect choices here. Correct choices and misses have the same stimuli presented, whether a contralateral or an ipsilateral stimulus, but the behavior of the mouse is different on these two groups of trials. And we also have passive trials, where the same stimuli are presented, but we're outside the context of the task altogether. The mouse has no opportunity to earn a reward by responding to the stimuli in this condition, and just passively views the same stimuli, sort of replayed to the mouse after performing the task.
And so for this neuron in secondary visual cortical area VISpm, the posterior medial visual area, you can see that this neuron has an early burst of activity following contralateral stimuli, regardless of whether the mouse correctly detected and responded to it, response times in black, and this neuron additionally has firing after the response; or whether the mouse missed the visual stimulus, that is, failed to respond to it; or whether that same visual stimulus was just presented in the passive context. And so this is the kind of response where, to put it precisely, you can predict the activity of this neuron by knowing the visual stimulus timing and identity, but this activity is not predictable by just knowing the action of the mouse. This kind of visual response was observed in what I would call largely the classic visual pathway. So this is primary and secondary visual cortical areas, and superficial layers of SC, which receive direct retinal input. We have visual thalamic nuclei, LP and LD here. We have a few areas that receive direct inputs from visual cortex: so the secondary motor cortex receives direct inputs from V1, and the striatum receives direct inputs from V1. We also have a few midbrain nuclei that have not been described as part of the classic visual pathway, and those are perhaps interesting. But, for instance, you can see somatosensory cortex not having visual responses, hippocampus not having visual responses, many of these thalamic nuclei not having visual responses. So by and large, this is what I would call the classic visual pathway, showing these visual-type responses in this task. All right, so I'm gonna change gears for just a second, and in a moment you'll see why I'm doing this, to talk about a parallel set of data collection we did in this exact same task with wide-field calcium imaging.
So wide-field calcium imaging involves mice that transgenically express the GCaMP fluorescent calcium indicator, and we're actually observing the activity across all of cortex, from visual cortex to frontal cortex, transcranially through the skull of the mouse while they perform behavioral tasks. And using this modality, we get a measure that's not single-cell specific, of course; we're not observing individual neurons here, but we have the sort of complete pattern of population-level activity across all of the different cortical regions. And here I'm showing another map of the visual encoding that I just showed you, again showing individual neurons that have visual encoding in primary and secondary visual cortical areas, and a few up here in frontal cortex, but largely not in somatosensory and motor cortex out here. And I'm showing you that this map is recapitulated precisely in a map that we compute of the visual encoding from the wide-field calcium imaging. So now we can really densely sample across space, and we can see that our reasonably dense, but also somewhat sparse, sampling with the Neuropixels recordings is recapitulated exactly by the pattern of visual encoding that we see in the wide-field calcium signals, again from primary and secondary visual cortical areas and from secondary motor cortex. So, all right, now I'm gonna tell you about the next kind of task correlate that we observed in the spiking data, and then again we'll come back to the wide-field data, and this will all make sense, I promise. All right, so this next type of neuron, this is a neuron from the subiculum region of the hippocampus, and this type of neuron, you can see, does not respond shortly after visual stimulus onset, and doesn't really respond at all on the missed trials or the passive trials, but does respond around the time when the mouse is executing the movement in this task.
And that's actually regardless of whether it's a contralateral or ipsilateral, a clockwise or counterclockwise, wheel turn. And if we align down here to the movement onset rather than stimulus onset, you can see that this activity is precisely aligned to the movement onset and actually precedes it just a little bit. So this type of activity, where there's activity leading up to movement onset but independent of the type of movement it is, whether it's a clockwise or counterclockwise wheel turn, we call action encoding. And we observe this type of encoding really widely spread across the brain. So we observe this in essentially every region we recorded from. I should have made clear: the white regions here we didn't record from, only the gray ones. So the olfactory regions here are one place where we did not observe this kind of action encoding, but we observed this action encoding in visual cortex, in frontal and somatosensory cortex. We observed it in hippocampus, in nearly all thalamic regions. We observed it in all the basal ganglia regions we recorded and all the midbrain regions we recorded. So this action encoding is really widespread and prevalent across the brain. And I think it's worth keeping in mind if you're studying a go/no-go task, or interpreting data from a go/no-go task, that I think you can assume that nearly every brain region is going to encode something about the action initiation in the task, and that's maybe not specific to the particular decision in a go/no-go task. And so in our task design, we have sort of this combination of the two-alternative forced choice and the go/no-go, which enables us to see that this type of response is really not about the specific decision about the visual stimulus. It's just about the motor execution. And that's why we call it an action encoding rather than a choice encoding.
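The realignment just described, taking the same spikes and aligning them to stimulus onset or to movement onset, could be sketched like this (function name and parameters are illustrative, not from the talk):

```python
import numpy as np

def event_aligned_counts(spike_times, event_times, window=(-0.5, 0.5), bin_size=0.01):
    """Bin spikes of one neuron around each event, in seconds.

    Returns an array (n_events, n_bins); averaging over axis 0 gives the
    PSTH. Aligning to movement onset instead of stimulus onset is simply
    a different `event_times` vector over the same spike train.
    """
    edges = np.arange(window[0], window[1] + bin_size / 2, bin_size)
    return np.stack([
        np.histogram(spike_times - t, bins=edges)[0] for t in event_times
    ])
```

Comparing the two alignments of the resulting average reveals whether a neuron's activity is locked to the stimulus or to the movement.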
All right, so here's just a movie of that action encoding developing across the neurons that we recorded spiking activity from in cortex. And you can see, again, just like I just showed you, that this is really observed in neurons in all of the cortical regions that we recorded. And again, this sort of what we call global action initiation coding is mirrored in our wide-field calcium imaging, and we can observe the exact same phenomenon: essentially every pixel that we recorded in our wide-field calcium imaging has this coding of the action initiation, has a change in activity that can be predicted by knowing when the mouse starts to turn the wheel. All right, so we wanted to know: does this actually mean that all parts of cortex are causally involved in this task? Or is it the case that some of this action encoding, or even all of it, is not causally relevant to performing the task, and instead it's maybe some kind of corollary discharge or some other sort of phenomenon that is not necessary for performing the task? And so to ask that question, we took an approach that's now a common approach, where you have a different transgenic mouse that expresses channelrhodopsin in the parvalbumin-expressing fast-spiking inhibitory interneurons. So then, by shining blue light, again through the skull, you can activate the inhibitory neurons and thus suppress the surrounding excitatory neurons. And this allows us to inactivate many different locations in cortex, a different location from one trial to the next. And here's just electrophysiological data showing that we can inactivate neurons across the depth of cortex and that our resolution is about one to two millimeters. And here's the result that we observed when we performed this inactivation experiment and looked at the change in choices when inactivating different cortical areas. And you can see that when you inactivate left visual cortex, you get a decrease in rightward choices, this blue color.
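The per-site effect being mapped here could be summarized as a simple difference in choice probability (a minimal sketch; the site names and data layout are hypothetical, and the paper's actual analysis is more elaborate):

```python
import numpy as np

def inactivation_effect(choices_ctrl, choices_by_site):
    """Change in P(rightward choice) for each inactivated cortical site.

    choices_ctrl: 0/1 array of rightward choices on control trials.
    choices_by_site: dict mapping site name -> 0/1 array on trials with
    that site inactivated. Negative values mean fewer rightward choices,
    the blue effect described over left visual and secondary motor cortex.
    """
    p_ctrl = np.mean(choices_ctrl)
    return {site: float(np.mean(c) - p_ctrl)
            for site, c in choices_by_site.items()}
```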
And this makes sense, because left visual cortex represents the contralateral visual stimulus, represents the right stimulus. And so it's as if you're sort of blinding the mouse, or preventing the mouse from recognizing that rightward visual stimulus. But not just visual cortex: we also get the same effect in secondary motor cortex, up here in frontal cortex. But we did not see similar effects in somatosensory cortex, primary motor cortex, or retrosplenial cortex. And so I think what you can now appreciate is that the inactivation effect that we observed, which again was localized to visual cortex and secondary motor cortex, very precisely reflects the location of visual encoding, which was in visual cortex and secondary motor cortex, but not the distribution of action encoding, which was, again, global across cortex. And in particular, the areas with the highest action encoding, which are somatosensory and motor cortex, did not have any effect of inactivation in our study. So we believe it's the case, then, that the presence of the visual encoding is what determines whether a particular cortical area is going to be causally necessary for performance of the task, at least in this particular task. So I've talked about stimulus encoding and action encoding, that is, a non-specific action encoding. What about a specific action encoding? What about a sort of action selection signal, or what we might call a choice signal? What about neurons that discriminate between when the mouse makes a left or a right choice, a clockwise or counterclockwise wheel turn? So we did find such neurons. Here's an example neuron in the deep layers of the superior colliculus. And you can see that it increases activity leading up to contralateral choices and decreases activity leading up to ipsilateral choices. This zero here is the time of movement onset, so this activity precedes movement onset by about a hundred milliseconds.
Here's a similar neuron from secondary motor cortex, up in frontal cortex, that prefers instead ipsilateral choice trials. Here's a neuron in the striatum that has a clear choice preference. Here's MRN, the midbrain reticular nucleus. And here's one in the zona incerta, ZI, which has a very strong choice preference. So we did find such neurons, but they were quite rare. A much lower percentage of neurons had this type of encoding than the other types of encoding I described, both in terms of which brain regions these neurons are found in and also in terms of the percentage of neurons in those brain regions. So here is a summary of a few more of these neurons, and I want to illustrate with these example neurons that we observed a striking pattern in the types of choice encoding in different brain regions. So these top four neurons here are all from the midbrain: the midbrain reticular nucleus and deep layers of SC. These bottom four neurons are all from the forebrain: secondary motor cortex, up in frontal cortex, and CP, again that's the caudate putamen, the striatum, the dorsal striatum, the input nucleus of the basal ganglia. And so what we can see is that, first of all, all of the midbrain neurons prefer orange trials over blue. That is, they prefer contralateral relative to ipsilateral. Whereas about half of these examples, and about half of the total population of forebrain neurons, instead prefer ipsilateral choices. Secondly, we can observe that for midbrain neurons, it's very common for the neuron to be suppressed below its baseline activity for its non-preferred choice. So three of these four are; this one's not. Whereas forebrain neurons were almost never suppressed below their baseline activity for the non-preferred choice. Instead, you can see the preferred and non-preferred choices both involve an increase in activity relative to baseline; it's just a greater increase for one than the other.
And so in this way, we were able to delineate sort of two choice encoding types. One that we call bilateral, in which activity increases for both choices, and neurons can prefer either of the two choices. And that encoding was found in the forebrain: in frontal cortex, in the basal ganglia, and in the thalamus. And secondly, the unilateral encoding of choice, in areas in which all neurons, or nearly all neurons, prefer the contralateral choice, and many neurons are suppressed below their baseline activity, as if there's a sort of push-pull relationship, a competitive relationship, between the two choices. And that we observed in the midbrain areas: SC, MRN, substantia nigra pars reticulata, and ZI. All right, so let me just summarize. So Neuropixels CMOS probes have enabled unprecedented scale of recordings across the brain without loss of resolution, so single-neuron, single-spike resolution. And I told you about Neuropixels 2.0 and how they enable these long-term stable recordings from 10,000 sites in freely moving mice. And then I told you about some principles that organize behavioral coding across both cortical and subcortical brain regions in a visual perceptual decision-making task. So specifically, action initiation is encoded globally, but these representations are not all necessary, for at least the particular task we studied, or, I would argue, likely for visual tasks in general. I did not tell you about engagement, but I wanted to point it out here, because there's a sort of third part of the paper that you can go read about if you're interested in this aspect. So we observed something striking, which is that cortical neurons are suppressed when the mouse is more engaged, but subcortical areas are more activated when the mouse is more engaged in the task. So that's a state-dependent change in activity that differs between cortical and subcortical regions.
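The bilateral-versus-unilateral delineation could be reduced to a simple per-neuron rule along these lines (a toy sketch; the thresholds and labels are illustrative, and the paper uses proper statistical criteria):

```python
def classify_choice_coding(baseline, contra, ipsi):
    """Label a neuron's choice coding from three firing rates (spikes/s).

    baseline: pre-trial rate; contra/ipsi: rates preceding contralateral
    and ipsilateral choices. The 0.8 suppression factor is arbitrary.
    """
    pref, nonpref = max(contra, ipsi), min(contra, ipsi)
    suppressed = nonpref < 0.8 * baseline      # pushed below baseline
    both_up = contra > baseline and ipsi > baseline
    if suppressed and pref > baseline:
        return "unilateral"   # push-pull, the midbrain-like pattern
    if both_up:
        return "bilateral"    # both choices excite, the forebrain-like pattern
    return "mixed"
```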
And finally, I showed you that choice is encoded in the midbrain unilaterally, but in the forebrain bilaterally. And I didn't get to tell you that we looked at the timing of the choice-related information in all of these areas and were unable to discriminate the timing across them, suggesting that the choice itself is formed simultaneously across this interconnected loop of areas. And that's our working hypothesis for the locus of decision formation in the mouse in this task. Okay, so I wanna thank Matteo Carandini and Kenneth Harris, who were my supervisors in this work at University College London. Peter Zatka-Haas was a really incredibly talented graduate student who worked with me on the Neuropixels 1.0 recordings and the wide-field calcium imaging and optogenetic experiments. And Tim Harris led the Neuropixels consortium from the outset. So thanks, Tim, for making all of this possible, and thanks to funding sources, and I'm happy to take any questions. Okay, great, Nick, thank you so much for such an interesting talk. It's a lot of data, it's really interesting. So we have a couple of questions in the chat. I'll just read them to you. So we have from Jaffar Dos-Mohamedi: how much is the size of the data per minute for all channels on Neuropixels 1.0, and how do you handle this large scale of signal? Yeah, it's actually not as bad as you might think. If you're used to calcium imaging, for instance, you may actually have a higher data rate than Neuropixels probes. The wide-field data I showed you is a much higher data rate than the Neuropixels probes, because even though each channel is sampled at 30 kilohertz, there's of course only about 400 channels that you're sampling from. So the total data rate is a little over a gigabyte per minute, so about 80 gigabytes per hour, for a Neuropixels 1.0 probe. And so you can record for many hours without filling up a standard hard drive.
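The arithmetic behind those figures is simple, assuming 2 bytes per sample for the 384-channel action-potential band (the slower LFP stream adds a little on top):

```python
# Back-of-envelope data rate for one Neuropixels 1.0 probe.
channels = 384
sample_rate_hz = 30_000
bytes_per_sample = 2

bytes_per_sec = channels * sample_rate_hz * bytes_per_sample
gb_per_min = bytes_per_sec * 60 / 1e9
gb_per_hour = bytes_per_sec * 3600 / 1e9
print(f"{gb_per_min:.2f} GB/min, {gb_per_hour:.0f} GB/hour")
```

This gives roughly 1.4 GB per minute and about 83 GB per hour, matching the "little over a gigabyte per minute, about 80 gigabytes per hour" quoted in the answer.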
And Kilosort, the spike sorting software that we developed, can easily process data sets of that size. And so, yeah, I think that we have the tools and the hardware necessary to deal with data of this scale without too much trouble. I don't think that'll be a limit for your experiments. Okay, good, thank you. So I have also a couple of questions. So I've seen how you are able to correct the drift that happens on a short timescale. I assume that's maybe due to locomotion or other movements of the mouse, right? So I was wondering, more in the long term, if there is a drift of the probe due to the tissue changing, and how would you be able to correct this? Yeah, so first of all, even in acute recordings, I think you get both kinds of drift. You have both this very fast drift that's related, as you say, to movements of the mouse, but also there is, like, tissue relaxation, or even sort of inflammation processes, on the scale of, say, tens of minutes, that can also affect short recordings. And this algorithm is able to handle both timescales. It will in fact probably do better with the slow timescale than the fast timescale. But across days, the problem can be a bit different, in that, if you're not recording continuously 24 hours a day, then you might leave one day and come back the next day and everything's shifted by a sort of step change. And that can make it a little bit trickier to find the right shift. But that's what we did, basically: do a single shift to align the two days at the intersection point, and then apply the algorithm across those two now shifted and realigned recordings. And that's described in the Neuropixels 2.0 paper. We think that it works well across days. I didn't show the figures, but we think that we can track successfully about 90% of neurons across two weeks of recordings and about 80% of neurons across two months of recordings. So that's tracking individual neurons across that time. Okay.
We have more questions now in the chat, from Joshua Solomon: what about distinguishing the timing of choice-related activity from the timing of stimulus-related activity? How would you do that? Yeah, so, good question. So the way that we quantified the stimulus- and action-related activity was with what we call a kernel model, where we're trying to predict the activity of each neuron on each trial by fitting a time course of response that that neuron has locked to the stimulus, and a different time course of activity that that neuron has locked to the action initiation. And the effect is that we're sort of taking advantage of the reaction-time variability to parse out which parts of the neuron's response on any trial might be due to the visual stimulus per se versus due to the movement per se. So if the activity is better explained by activity that's locked to the movement onset, because on trials with long reaction time that activity will be later, and on trials with short reaction time that activity will be earlier, then that will show up in the model as an action-related correlate, either action or choice, which is to say either action initiation or action selection. Versus, if it instead is just locked to the stimulus onset and happens at a fixed latency relative to stimulus onset, then that's a stimulus-locked response. So the model takes into account both the reaction-time variability as well as the different stimulus conditions. So the fact that they can sometimes make a clockwise choice when there's no contralateral stimulus at all, and sometimes they can make that same choice when there's a high contrast contralateral stimulus. And so those different conditions, again, enable the model to sort of pull apart the different contributions. Okay, we have a couple more questions, from Goki Okazawa.
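As an aside, the kernel model described in that answer can be sketched as regression on lagged event indicators: each event type gets one column per time lag, so ordinary least squares recovers a separate temporal kernel locked to stimulus onset and to movement onset, and reaction-time variability is what lets the fit attribute activity to one event or the other (bin counts and kernel lengths here are illustrative):

```python
import numpy as np

def kernel_design(n_bins, stim_onsets, move_onsets, stim_len=25, move_len=25):
    """Design matrix of lagged event indicators (one column per lag)."""
    X = np.zeros((n_bins, stim_len + move_len))
    for t in stim_onsets:
        for lag in range(stim_len):
            if t + lag < n_bins:
                X[t + lag, lag] = 1.0
    for t in move_onsets:
        for lag in range(move_len):
            if t + lag < n_bins:
                X[t + lag, stim_len + lag] = 1.0
    return X

# Given y, the binned firing rate of one neuron:
#   beta, *_ = np.linalg.lstsq(X, y, rcond=None)
#   stim_kernel, move_kernel = beta[:stim_len], beta[stim_len:]
```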
Do you think you found bilateral preferences in cortex because the animal used both arms to move the wheel? But then why is it unilateral in midbrain? What is expected? Yeah, great question, and I completely agree with that line of thinking. So we definitely had the expectation that if we were recording from a set of neurons that was purely involved in controlling muscles, in sending signals directly to muscle groups, then we ought to see bilateral preferences from those neurons, because the mice usually use both forearms to push the wheel, as you saw in the example video, and so there would have to be signals sent to both the left and the right forearm for each of the choices. And so that actually is probably the best evidence that we have that the choice signals that we see in the midbrain are really about the decision of which stimulus was present, and not about just a purely motor function to begin with. But that's a question that we're following up on in my lab now, because this task was not optimally designed to discriminate something that's, say, a motor response versus a decision per se. Okay. Oh, you muted yourself, Antonio. Oh, sorry, sorry. So yeah, we're gonna go for the last question. Thank you. By Torfis: how do you compare how choices are represented at the population level within different brain regions? Are there more regions that encode for choice if population responses are examined? Yeah, great question. So we did do a population decoding, and we did the population decoding for the specific purpose of trying to get the most sensitive readout of the timing of the choice information that we could. So when we did the population decoding analysis, we were looking at the time course of the population decoding of choice in the midbrain nuclei, in the basal ganglia nuclei, and in the frontal cortex regions that we observed choice signaling in.
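A time course of population choice decoding like the one described could be sketched as a cross-validated linear read-out per time bin (a toy sketch with a simple nearest-class-mean decoder; the paper's choice of decoder is not specified here):

```python
import numpy as np

def decode_choice_timecourse(activity, choice):
    """Linear read-out of choice from population activity, per time bin.

    activity: (n_trials, n_neurons, n_bins); choice: (n_trials,) of 0/1.
    Projects each held-out trial onto the difference of class means and
    reports leave-one-out accuracy, giving the time course of choice
    information in the population.
    """
    n_trials, _, n_bins = activity.shape
    acc = np.zeros(n_bins)
    for b in range(n_bins):
        X = activity[:, :, b]
        correct = 0
        for i in range(n_trials):                     # leave-one-out
            train = np.arange(n_trials) != i
            mu1 = X[train & (choice == 1)].mean(axis=0)
            mu0 = X[train & (choice == 0)].mean(axis=0)
            w = mu1 - mu0
            thresh = w @ (mu1 + mu0) / 2
            pred = int(X[i] @ w > thresh)
            correct += pred == choice[i]
        acc[b] = correct / n_trials
    return acc
```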
And the result there, I think I mentioned before, was that the time course of those responses was similar. The time course of the population representation of choice was similar in all of those regions, and you can see that in, I think, figure four in the paper. But I think I did not do a population decoding of the regions where we did not find individual neurons. And so I think that's actually a really interesting question. The dataset, by the way, is freely available; you can download it online. And if you wanted to try, for instance, sort of different nonlinear decoding strategies, or other kinds of interesting decoding strategies that might reveal something in the population that is not visible at the single-neuron level, I think there's some possibility that you could find something there. Certainly a linear decoding would require that individual neurons have at least some signal about the choice. Of course, it could be the case that in individual neurons the choice signal was too low in amplitude to be detected in any one individual neuron, but at the population level you could nevertheless detect it. So that kind of thing is possible. Yeah, we didn't look for that. So, good question. Okay, thank you, Nick, for answering the questions. And so, yeah, we don't have any more questions, and it's about time to finish. So I would like also to thank everyone who asked questions and has been watching us. And please join us for the discussion that we will have right now after the meeting. So I put up the link for the Zoom meeting if you want to join. Thank you. Thanks. Yeah, great. So I'll still keep broadcasting, because when I close, they will stop seeing the chat. So it's better if I keep it open. Let's see if people start joining now.
So while people come: so now there is the option, because YouTube by default will have it online for one day so people can access it freely, but then there is the option that either we keep it there for everyone to see, or we can remove it, as you prefer. Because sometimes people have some information that they don't want to be published yet, but most people... Yeah, as you saw, this was all published work, so it's fine to put it up, yeah. It's fine, okay, good. Yeah, thank you. Yeah, luckily the Neuropixels 2.0 study and the optogenetic and wide-field study both came out this year, so all published recently. Yeah, great. Okay, so, while people come, I have a couple of questions if you don't mind. Yeah, go for it. So first, because at some point you showed that you had to put in two different Neuropixels probes, I was wondering how easy it is to access certain structures at the same time, given that the probe cannot be bent. Do you find a limitation in that, and have you thought about the possibility of generating probes that are bent somehow? Yeah, I get this question kind of a lot, actually, about whether you can bend the probe in the brain somehow. Yeah, I think it's an interesting idea. I don't know how it would be achieved in practice. It sounds really tricky to me, and by any mechanism that I'm aware of, you cannot do this with Neuropixels probes. So the Neuropixels probes are straight, and you just have to pick your insertion carefully. So Andy Peters in the Kenneth Harris lab wrote a really nice little GUI that allows you to sort of choose a target. You can sort of pick an angle, and it tells you which brain regions you're gonna hit at that angle. And basically what you have to do is just try to think carefully and design your insertions to get the structures that you wanna hit.
For almost any pair of structures, you can probably design a single insertion that hits both of them. But if you start wanting to record from three or more structures simultaneously, then you're gonna start requiring maybe two or more probes simultaneously. Fortunately, the probe itself is thin, and so it's really not a problem to put multiple of them in the brain at once. I in fact did a recording with eight probes simultaneously inserted in one day, and that's more of a challenge, but still doable. And so that really should indicate that two or three is no problem. It's more of a challenge because, I guess, once you have many, they may interfere with each other? Exactly, yeah, it's just a geometrical constraint. Yeah, that's it. Okay. But still, the data you showed was good, and there you had two probes. Oh, someone is joining. It's a bit delayed, so maybe, you know, it takes time for people to join, yeah. Sure, sure. Yeah, so I had another question, about when you showed that you inactivated different regions by activating the PV neurons. I was wondering, because you showed that there is more of a change in the preference for choosing the different sides in the visual regions more than in prefrontal cortex. That was a bit striking to me, because I would have expected that prefrontal cortices may be more important in decision-making. But, on the other hand, you need the primary regions to communicate to prefrontal cortices. So yeah, I think that it's a bit hard to directly interpret the magnitude of the effect, at least from that one plot. We did, though, do some really extensive sort of modeling in the paper, where we tried to actually take even the wide-field data that we recorded and try to predict the inactivation results from the wide-field data.
So in other words, to say: if we assume that the mouse is making its decision based on the signals from the cortex that we can see in wide-field, and it's linearly combining those signals and checking whether they cross some threshold, then what if we were to delete one of those signals and keep the same decision model? Can we understand what different choices the mouse would now make? And in fact, that kind of model actually fits the inactivation data very well. And so we think we actually are able to go from the details of the wide-field information to the details of the optogenetic inactivations. But that single plot that I showed wasn't sort of capturing the full detail. It was just a single contrast condition, basically, just to summarize the locations where we saw effects. Okay. Yeah, I didn't read the paper; I'll have a look. Yeah, I think it's a pretty nice little model, and the paper's not too long. All right, what are your thoughts on how to analyze these vast amounts of data? Yeah, so one analysis that we've been doing a lot of in my lab now, and that we did do some of in this paper as well, examining the data at the population level, is sort of reduced rank regression analysis. So the idea is you're trying to predict one thing from another.
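Reduced rank regression, as mentioned here, can be sketched in a few lines: fit the ordinary least squares map, then restrict it to the top predictive dimensions via an SVD of the fitted values (this is one standard formulation; the talk does not specify the lab's exact implementation):

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Predict Y from X with a rank-constrained linear map.

    X: (n_samples, p), Y: (n_samples, q). Fits B_ols by least squares,
    then projects the predictions onto the top `rank` principal
    directions of the fitted values. Returns B of shape (p, q) with
    rank at most `rank`.
    """
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
    Y_hat = X @ B_ols
    # principal directions of the fitted values define the low-rank subspace
    _, _, Vt = np.linalg.svd(Y_hat, full_matrices=False)
    V = Vt[:rank].T                  # (q, rank)
    return B_ols @ V @ V.T           # rank-constrained coefficient matrix
```

The appeal for data of this scale is that a low rank forces the prediction from one population to another through a small number of shared dimensions, which are themselves interpretable.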